Nov 29 01:15:55 np0005539551 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 01:15:55 np0005539551 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 01:15:55 np0005539551 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539551 kernel: BIOS-provided physical RAM map:
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 01:15:55 np0005539551 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 01:15:55 np0005539551 kernel: NX (Execute Disable) protection: active
Nov 29 01:15:55 np0005539551 kernel: APIC: Static calls initialized
Nov 29 01:15:55 np0005539551 kernel: SMBIOS 2.8 present.
Nov 29 01:15:55 np0005539551 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 01:15:55 np0005539551 kernel: Hypervisor detected: KVM
Nov 29 01:15:55 np0005539551 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 01:15:55 np0005539551 kernel: kvm-clock: using sched offset of 3302590969 cycles
Nov 29 01:15:55 np0005539551 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 01:15:55 np0005539551 kernel: tsc: Detected 2799.998 MHz processor
Nov 29 01:15:55 np0005539551 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 01:15:55 np0005539551 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 01:15:55 np0005539551 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 01:15:55 np0005539551 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 01:15:55 np0005539551 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 01:15:55 np0005539551 kernel: Using GB pages for direct mapping
Nov 29 01:15:55 np0005539551 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 01:15:55 np0005539551 kernel: ACPI: Early table checksum verification disabled
Nov 29 01:15:55 np0005539551 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 01:15:55 np0005539551 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539551 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539551 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539551 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 01:15:55 np0005539551 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539551 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 01:15:55 np0005539551 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 01:15:55 np0005539551 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 01:15:55 np0005539551 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 01:15:55 np0005539551 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 01:15:55 np0005539551 kernel: No NUMA configuration found
Nov 29 01:15:55 np0005539551 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539551 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 29 01:15:55 np0005539551 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 01:15:55 np0005539551 kernel: Zone ranges:
Nov 29 01:15:55 np0005539551 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 01:15:55 np0005539551 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 01:15:55 np0005539551 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539551 kernel:  Device   empty
Nov 29 01:15:55 np0005539551 kernel: Movable zone start for each node
Nov 29 01:15:55 np0005539551 kernel: Early memory node ranges
Nov 29 01:15:55 np0005539551 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 01:15:55 np0005539551 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 01:15:55 np0005539551 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539551 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 01:15:55 np0005539551 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 01:15:55 np0005539551 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 01:15:55 np0005539551 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 01:15:55 np0005539551 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 01:15:55 np0005539551 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 01:15:55 np0005539551 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 01:15:55 np0005539551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 01:15:55 np0005539551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 01:15:55 np0005539551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 01:15:55 np0005539551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 01:15:55 np0005539551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 01:15:55 np0005539551 kernel: TSC deadline timer available
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Max. logical packages:   8
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Max. logical dies:       8
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Max. dies per package:   1
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Max. threads per core:   1
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Num. cores per package:     1
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Num. threads per package:   1
Nov 29 01:15:55 np0005539551 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 01:15:55 np0005539551 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 01:15:55 np0005539551 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 01:15:55 np0005539551 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 01:15:55 np0005539551 kernel: Booting paravirtualized kernel on KVM
Nov 29 01:15:55 np0005539551 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 01:15:55 np0005539551 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 01:15:55 np0005539551 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 01:15:55 np0005539551 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 01:15:55 np0005539551 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539551 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 01:15:55 np0005539551 kernel: random: crng init done
Nov 29 01:15:55 np0005539551 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: Fallback order for Node 0: 0 
Nov 29 01:15:55 np0005539551 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 01:15:55 np0005539551 kernel: Policy zone: Normal
Nov 29 01:15:55 np0005539551 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 01:15:55 np0005539551 kernel: software IO TLB: area num 8.
Nov 29 01:15:55 np0005539551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 01:15:55 np0005539551 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 01:15:55 np0005539551 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 01:15:55 np0005539551 kernel: Dynamic Preempt: voluntary
Nov 29 01:15:55 np0005539551 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 01:15:55 np0005539551 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 01:15:55 np0005539551 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 01:15:55 np0005539551 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539551 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539551 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539551 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 01:15:55 np0005539551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 01:15:55 np0005539551 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539551 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539551 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539551 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 01:15:55 np0005539551 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 01:15:55 np0005539551 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 01:15:55 np0005539551 kernel: Console: colour VGA+ 80x25
Nov 29 01:15:55 np0005539551 kernel: printk: console [ttyS0] enabled
Nov 29 01:15:55 np0005539551 kernel: ACPI: Core revision 20230331
Nov 29 01:15:55 np0005539551 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 01:15:55 np0005539551 kernel: x2apic enabled
Nov 29 01:15:55 np0005539551 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 01:15:55 np0005539551 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 01:15:55 np0005539551 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 01:15:55 np0005539551 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 01:15:55 np0005539551 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 01:15:55 np0005539551 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 01:15:55 np0005539551 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 01:15:55 np0005539551 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 01:15:55 np0005539551 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 01:15:55 np0005539551 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 01:15:55 np0005539551 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 01:15:55 np0005539551 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 01:15:55 np0005539551 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 01:15:55 np0005539551 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 01:15:55 np0005539551 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 01:15:55 np0005539551 kernel: x86/bugs: return thunk changed
Nov 29 01:15:55 np0005539551 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 01:15:55 np0005539551 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 01:15:55 np0005539551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 01:15:55 np0005539551 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 01:15:55 np0005539551 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 01:15:55 np0005539551 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 01:15:55 np0005539551 kernel: Freeing SMP alternatives memory: 40K
Nov 29 01:15:55 np0005539551 kernel: pid_max: default: 32768 minimum: 301
Nov 29 01:15:55 np0005539551 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 01:15:55 np0005539551 kernel: landlock: Up and running.
Nov 29 01:15:55 np0005539551 kernel: Yama: becoming mindful.
Nov 29 01:15:55 np0005539551 kernel: SELinux:  Initializing.
Nov 29 01:15:55 np0005539551 kernel: LSM support for eBPF active
Nov 29 01:15:55 np0005539551 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 01:15:55 np0005539551 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 01:15:55 np0005539551 kernel: ... version:                0
Nov 29 01:15:55 np0005539551 kernel: ... bit width:              48
Nov 29 01:15:55 np0005539551 kernel: ... generic registers:      6
Nov 29 01:15:55 np0005539551 kernel: ... value mask:             0000ffffffffffff
Nov 29 01:15:55 np0005539551 kernel: ... max period:             00007fffffffffff
Nov 29 01:15:55 np0005539551 kernel: ... fixed-purpose events:   0
Nov 29 01:15:55 np0005539551 kernel: ... event mask:             000000000000003f
Nov 29 01:15:55 np0005539551 kernel: signal: max sigframe size: 1776
Nov 29 01:15:55 np0005539551 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 01:15:55 np0005539551 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 01:15:55 np0005539551 kernel: smp: Bringing up secondary CPUs ...
Nov 29 01:15:55 np0005539551 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 01:15:55 np0005539551 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 01:15:55 np0005539551 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 01:15:55 np0005539551 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 01:15:55 np0005539551 kernel: node 0 deferred pages initialised in 22ms
Nov 29 01:15:55 np0005539551 kernel: Memory: 7765876K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616276K reserved, 0K cma-reserved)
Nov 29 01:15:55 np0005539551 kernel: devtmpfs: initialized
Nov 29 01:15:55 np0005539551 kernel: x86/mm: Memory block size: 128MB
Nov 29 01:15:55 np0005539551 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 01:15:55 np0005539551 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 01:15:55 np0005539551 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 01:15:55 np0005539551 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 01:15:55 np0005539551 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 01:15:55 np0005539551 kernel: audit: initializing netlink subsys (disabled)
Nov 29 01:15:55 np0005539551 kernel: audit: type=2000 audit(1764396953.419:1): state=initialized audit_enabled=0 res=1
Nov 29 01:15:55 np0005539551 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 01:15:55 np0005539551 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 01:15:55 np0005539551 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 01:15:55 np0005539551 kernel: cpuidle: using governor menu
Nov 29 01:15:55 np0005539551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 01:15:55 np0005539551 kernel: PCI: Using configuration type 1 for base access
Nov 29 01:15:55 np0005539551 kernel: PCI: Using configuration type 1 for extended access
Nov 29 01:15:55 np0005539551 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 01:15:55 np0005539551 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 01:15:55 np0005539551 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 01:15:55 np0005539551 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 01:15:55 np0005539551 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 01:15:55 np0005539551 kernel: Demotion targets for Node 0: null
Nov 29 01:15:55 np0005539551 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 01:15:55 np0005539551 kernel: ACPI: Added _OSI(Module Device)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 01:15:55 np0005539551 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 01:15:55 np0005539551 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 01:15:55 np0005539551 kernel: ACPI: Interpreter enabled
Nov 29 01:15:55 np0005539551 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 01:15:55 np0005539551 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 01:15:55 np0005539551 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 01:15:55 np0005539551 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 01:15:55 np0005539551 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 01:15:55 np0005539551 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [3] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [4] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [5] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [6] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [7] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [8] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [9] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [10] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [11] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [12] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [13] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [14] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [15] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [16] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [17] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [18] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [19] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [20] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [21] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [22] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [23] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [24] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [25] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [26] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [27] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [28] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [29] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [30] registered
Nov 29 01:15:55 np0005539551 kernel: acpiphp: Slot [31] registered
Nov 29 01:15:55 np0005539551 kernel: PCI host bridge to bus 0000:00
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 01:15:55 np0005539551 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 01:15:55 np0005539551 kernel: iommu: Default domain type: Translated
Nov 29 01:15:55 np0005539551 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 01:15:55 np0005539551 kernel: SCSI subsystem initialized
Nov 29 01:15:55 np0005539551 kernel: ACPI: bus type USB registered
Nov 29 01:15:55 np0005539551 kernel: usbcore: registered new interface driver usbfs
Nov 29 01:15:55 np0005539551 kernel: usbcore: registered new interface driver hub
Nov 29 01:15:55 np0005539551 kernel: usbcore: registered new device driver usb
Nov 29 01:15:55 np0005539551 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 01:15:55 np0005539551 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 01:15:55 np0005539551 kernel: PTP clock support registered
Nov 29 01:15:55 np0005539551 kernel: EDAC MC: Ver: 3.0.0
Nov 29 01:15:55 np0005539551 kernel: NetLabel: Initializing
Nov 29 01:15:55 np0005539551 kernel: NetLabel:  domain hash size = 128
Nov 29 01:15:55 np0005539551 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 01:15:55 np0005539551 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 01:15:55 np0005539551 kernel: PCI: Using ACPI for IRQ routing
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 01:15:55 np0005539551 kernel: vgaarb: loaded
Nov 29 01:15:55 np0005539551 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 01:15:55 np0005539551 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 01:15:55 np0005539551 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 01:15:55 np0005539551 kernel: pnp: PnP ACPI init
Nov 29 01:15:55 np0005539551 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 01:15:55 np0005539551 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_INET protocol family
Nov 29 01:15:55 np0005539551 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 01:15:55 np0005539551 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_XDP protocol family
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:55 np0005539551 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 01:15:55 np0005539551 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 01:15:55 np0005539551 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 113161 usecs
Nov 29 01:15:55 np0005539551 kernel: PCI: CLS 0 bytes, default 64
Nov 29 01:15:55 np0005539551 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 01:15:55 np0005539551 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 01:15:55 np0005539551 kernel: ACPI: bus type thunderbolt registered
Nov 29 01:15:55 np0005539551 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 01:15:55 np0005539551 kernel: Initialise system trusted keyrings
Nov 29 01:15:55 np0005539551 kernel: Key type blacklist registered
Nov 29 01:15:55 np0005539551 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 01:15:55 np0005539551 kernel: zbud: loaded
Nov 29 01:15:55 np0005539551 kernel: integrity: Platform Keyring initialized
Nov 29 01:15:55 np0005539551 kernel: integrity: Machine keyring initialized
Nov 29 01:15:55 np0005539551 kernel: Freeing initrd memory: 85868K
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_ALG protocol family
Nov 29 01:15:55 np0005539551 kernel: xor: automatically using best checksumming function   avx       
Nov 29 01:15:55 np0005539551 kernel: Key type asymmetric registered
Nov 29 01:15:55 np0005539551 kernel: Asymmetric key parser 'x509' registered
Nov 29 01:15:55 np0005539551 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 01:15:55 np0005539551 kernel: io scheduler mq-deadline registered
Nov 29 01:15:55 np0005539551 kernel: io scheduler kyber registered
Nov 29 01:15:55 np0005539551 kernel: io scheduler bfq registered
Nov 29 01:15:55 np0005539551 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 01:15:55 np0005539551 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 01:15:55 np0005539551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 01:15:55 np0005539551 kernel: ACPI: button: Power Button [PWRF]
Nov 29 01:15:55 np0005539551 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 01:15:55 np0005539551 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 01:15:55 np0005539551 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 01:15:55 np0005539551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 01:15:55 np0005539551 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 01:15:55 np0005539551 kernel: Non-volatile memory driver v1.3
Nov 29 01:15:55 np0005539551 kernel: rdac: device handler registered
Nov 29 01:15:55 np0005539551 kernel: hp_sw: device handler registered
Nov 29 01:15:55 np0005539551 kernel: emc: device handler registered
Nov 29 01:15:55 np0005539551 kernel: alua: device handler registered
Nov 29 01:15:55 np0005539551 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 01:15:55 np0005539551 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 01:15:55 np0005539551 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 01:15:55 np0005539551 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 01:15:55 np0005539551 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 01:15:55 np0005539551 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 01:15:55 np0005539551 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 01:15:55 np0005539551 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 01:15:55 np0005539551 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 01:15:55 np0005539551 kernel: hub 1-0:1.0: USB hub found
Nov 29 01:15:55 np0005539551 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 01:15:55 np0005539551 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 01:15:55 np0005539551 kernel: usbserial: USB Serial support registered for generic
Nov 29 01:15:55 np0005539551 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 01:15:55 np0005539551 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 01:15:55 np0005539551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 01:15:55 np0005539551 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 01:15:55 np0005539551 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 01:15:55 np0005539551 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 01:15:55 np0005539551 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 01:15:55 np0005539551 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:15:54 UTC (1764396954)
Nov 29 01:15:55 np0005539551 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 01:15:55 np0005539551 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 01:15:55 np0005539551 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 01:15:55 np0005539551 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 01:15:55 np0005539551 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 01:15:55 np0005539551 kernel: usbcore: registered new interface driver usbhid
Nov 29 01:15:55 np0005539551 kernel: usbhid: USB HID core driver
Nov 29 01:15:55 np0005539551 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 01:15:55 np0005539551 kernel: Initializing XFRM netlink socket
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_INET6 protocol family
Nov 29 01:15:55 np0005539551 kernel: Segment Routing with IPv6
Nov 29 01:15:55 np0005539551 kernel: NET: Registered PF_PACKET protocol family
Nov 29 01:15:55 np0005539551 kernel: mpls_gso: MPLS GSO support
Nov 29 01:15:55 np0005539551 kernel: IPI shorthand broadcast: enabled
Nov 29 01:15:55 np0005539551 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 01:15:55 np0005539551 kernel: AES CTR mode by8 optimization enabled
Nov 29 01:15:55 np0005539551 kernel: sched_clock: Marking stable (1490001378, 149575213)->(1746636513, -107059922)
Nov 29 01:15:55 np0005539551 kernel: registered taskstats version 1
Nov 29 01:15:55 np0005539551 kernel: Loading compiled-in X.509 certificates
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 01:15:55 np0005539551 kernel: Demotion targets for Node 0: null
Nov 29 01:15:55 np0005539551 kernel: page_owner is disabled
Nov 29 01:15:55 np0005539551 kernel: Key type .fscrypt registered
Nov 29 01:15:55 np0005539551 kernel: Key type fscrypt-provisioning registered
Nov 29 01:15:55 np0005539551 kernel: Key type big_key registered
Nov 29 01:15:55 np0005539551 kernel: Key type encrypted registered
Nov 29 01:15:55 np0005539551 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 01:15:55 np0005539551 kernel: Loading compiled-in module X.509 certificates
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:55 np0005539551 kernel: ima: Allocated hash algorithm: sha256
Nov 29 01:15:55 np0005539551 kernel: ima: No architecture policies found
Nov 29 01:15:55 np0005539551 kernel: evm: Initialising EVM extended attributes:
Nov 29 01:15:55 np0005539551 kernel: evm: security.selinux
Nov 29 01:15:55 np0005539551 kernel: evm: security.SMACK64 (disabled)
Nov 29 01:15:55 np0005539551 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 01:15:55 np0005539551 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 01:15:55 np0005539551 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 01:15:55 np0005539551 kernel: evm: security.apparmor (disabled)
Nov 29 01:15:55 np0005539551 kernel: evm: security.ima
Nov 29 01:15:55 np0005539551 kernel: evm: security.capability
Nov 29 01:15:55 np0005539551 kernel: evm: HMAC attrs: 0x1
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 01:15:55 np0005539551 kernel: Running certificate verification RSA selftest
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 01:15:55 np0005539551 kernel: Running certificate verification ECDSA selftest
Nov 29 01:15:55 np0005539551 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 01:15:55 np0005539551 kernel: clk: Disabling unused clocks
Nov 29 01:15:55 np0005539551 kernel: Freeing unused decrypted memory: 2028K
Nov 29 01:15:55 np0005539551 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 01:15:55 np0005539551 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 01:15:55 np0005539551 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 01:15:55 np0005539551 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 01:15:55 np0005539551 kernel: Run /init as init process
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 01:15:55 np0005539551 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 01:15:55 np0005539551 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 01:15:55 np0005539551 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 01:15:55 np0005539551 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:15:55 np0005539551 systemd: Detected virtualization kvm.
Nov 29 01:15:55 np0005539551 systemd: Detected architecture x86-64.
Nov 29 01:15:55 np0005539551 systemd: Running in initrd.
Nov 29 01:15:55 np0005539551 systemd: No hostname configured, using default hostname.
Nov 29 01:15:55 np0005539551 systemd: Hostname set to <localhost>.
Nov 29 01:15:55 np0005539551 systemd: Initializing machine ID from VM UUID.
Nov 29 01:15:55 np0005539551 systemd: Queued start job for default target Initrd Default Target.
Nov 29 01:15:55 np0005539551 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:55 np0005539551 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:15:55 np0005539551 systemd: Reached target Initrd /usr File System.
Nov 29 01:15:55 np0005539551 systemd: Reached target Local File Systems.
Nov 29 01:15:55 np0005539551 systemd: Reached target Path Units.
Nov 29 01:15:55 np0005539551 systemd: Reached target Slice Units.
Nov 29 01:15:55 np0005539551 systemd: Reached target Swaps.
Nov 29 01:15:55 np0005539551 systemd: Reached target Timer Units.
Nov 29 01:15:55 np0005539551 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 01:15:55 np0005539551 systemd: Listening on Journal Socket (/dev/log).
Nov 29 01:15:55 np0005539551 systemd: Listening on Journal Socket.
Nov 29 01:15:55 np0005539551 systemd: Listening on udev Control Socket.
Nov 29 01:15:55 np0005539551 systemd: Listening on udev Kernel Socket.
Nov 29 01:15:55 np0005539551 systemd: Reached target Socket Units.
Nov 29 01:15:55 np0005539551 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:15:55 np0005539551 systemd: Starting Journal Service...
Nov 29 01:15:55 np0005539551 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:15:55 np0005539551 systemd: Starting Apply Kernel Variables...
Nov 29 01:15:55 np0005539551 systemd: Starting Create System Users...
Nov 29 01:15:55 np0005539551 systemd: Starting Setup Virtual Console...
Nov 29 01:15:55 np0005539551 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:15:55 np0005539551 systemd: Finished Apply Kernel Variables.
Nov 29 01:15:55 np0005539551 systemd: Finished Create System Users.
Nov 29 01:15:55 np0005539551 systemd-journald[304]: Journal started
Nov 29 01:15:55 np0005539551 systemd-journald[304]: Runtime Journal (/run/log/journal/2d58586e4ce1425d890cd5cdff75e822) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:55 np0005539551 systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 29 01:15:55 np0005539551 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 29 01:15:55 np0005539551 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 01:15:55 np0005539551 systemd: Started Journal Service.
Nov 29 01:15:55 np0005539551 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:15:55 np0005539551 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:15:55 np0005539551 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:15:55 np0005539551 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:15:55 np0005539551 systemd[1]: Finished Setup Virtual Console.
Nov 29 01:15:55 np0005539551 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 01:15:55 np0005539551 systemd[1]: Starting dracut cmdline hook...
Nov 29 01:15:55 np0005539551 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 01:15:55 np0005539551 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539551 systemd[1]: Finished dracut cmdline hook.
Nov 29 01:15:55 np0005539551 systemd[1]: Starting dracut pre-udev hook...
Nov 29 01:15:55 np0005539551 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 01:15:55 np0005539551 kernel: device-mapper: uevent: version 1.0.3
Nov 29 01:15:55 np0005539551 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 01:15:55 np0005539551 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 01:15:55 np0005539551 kernel: RPC: Registered udp transport module.
Nov 29 01:15:55 np0005539551 kernel: RPC: Registered tcp transport module.
Nov 29 01:15:55 np0005539551 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 01:15:55 np0005539551 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 01:15:55 np0005539551 rpc.statd[441]: Version 2.5.4 starting
Nov 29 01:15:55 np0005539551 rpc.statd[441]: Initializing NSM state
Nov 29 01:15:56 np0005539551 rpc.idmapd[446]: Setting log level to 0
Nov 29 01:15:56 np0005539551 systemd[1]: Finished dracut pre-udev hook.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:15:56 np0005539551 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:15:56 np0005539551 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 01:15:56 np0005539551 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 01:15:56 np0005539551 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:15:56 np0005539551 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:15:56 np0005539551 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:56 np0005539551 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:15:56 np0005539551 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Network.
Nov 29 01:15:56 np0005539551 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:56 np0005539551 systemd[1]: Starting dracut initqueue hook...
Nov 29 01:15:56 np0005539551 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 01:15:56 np0005539551 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target System Initialization.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Basic System.
Nov 29 01:15:56 np0005539551 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 01:15:56 np0005539551 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 01:15:56 np0005539551 kernel: vda: vda1
Nov 29 01:15:56 np0005539551 systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:15:56 np0005539551 kernel: scsi host0: ata_piix
Nov 29 01:15:56 np0005539551 kernel: scsi host1: ata_piix
Nov 29 01:15:56 np0005539551 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 01:15:56 np0005539551 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 01:15:56 np0005539551 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Initrd Root Device.
Nov 29 01:15:56 np0005539551 kernel: ata1: found unknown device (class 0)
Nov 29 01:15:56 np0005539551 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 01:15:56 np0005539551 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 01:15:56 np0005539551 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 01:15:56 np0005539551 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 01:15:56 np0005539551 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 01:15:56 np0005539551 systemd[1]: Finished dracut initqueue hook.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 01:15:56 np0005539551 systemd[1]: Reached target Remote File Systems.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting dracut pre-mount hook...
Nov 29 01:15:56 np0005539551 systemd[1]: Finished dracut pre-mount hook.
Nov 29 01:15:56 np0005539551 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 01:15:56 np0005539551 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 01:15:56 np0005539551 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:56 np0005539551 systemd[1]: Mounting /sysroot...
Nov 29 01:15:57 np0005539551 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 01:15:57 np0005539551 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 01:15:57 np0005539551 kernel: XFS (vda1): Ending clean mount
Nov 29 01:15:57 np0005539551 systemd[1]: Mounted /sysroot.
Nov 29 01:15:57 np0005539551 systemd[1]: Reached target Initrd Root File System.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 01:15:57 np0005539551 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 01:15:57 np0005539551 systemd[1]: Reached target Initrd File Systems.
Nov 29 01:15:57 np0005539551 systemd[1]: Reached target Initrd Default Target.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting dracut mount hook...
Nov 29 01:15:57 np0005539551 systemd[1]: Finished dracut mount hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 01:15:57 np0005539551 rpc.idmapd[446]: exiting on signal 15
Nov 29 01:15:57 np0005539551 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Network.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Timer Units.
Nov 29 01:15:57 np0005539551 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Initrd Default Target.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Basic System.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Initrd Root Device.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Path Units.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Remote File Systems.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Slice Units.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Socket Units.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target System Initialization.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Local File Systems.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Swaps.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut mount hook.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut initqueue hook.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Setup Virtual Console.
Nov 29 01:15:57 np0005539551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Closed udev Control Socket.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Closed udev Kernel Socket.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 01:15:57 np0005539551 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped dracut cmdline hook.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting Cleanup udev Database...
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 01:15:57 np0005539551 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 01:15:57 np0005539551 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Stopped Create System Users.
Nov 29 01:15:57 np0005539551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 01:15:57 np0005539551 systemd[1]: Finished Cleanup udev Database.
Nov 29 01:15:57 np0005539551 systemd[1]: Reached target Switch Root.
Nov 29 01:15:57 np0005539551 systemd[1]: Starting Switch Root...
Nov 29 01:15:57 np0005539551 systemd[1]: Switching root.
Nov 29 01:15:57 np0005539551 systemd-journald[304]: Journal stopped
Nov 29 01:15:58 np0005539551 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 29 01:15:58 np0005539551 kernel: audit: type=1404 audit(1764396957.750:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:15:58 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:15:58 np0005539551 kernel: audit: type=1403 audit(1764396957.883:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 01:15:58 np0005539551 systemd: Successfully loaded SELinux policy in 135.280ms.
Nov 29 01:15:58 np0005539551 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.785ms.
Nov 29 01:15:58 np0005539551 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:15:58 np0005539551 systemd: Detected virtualization kvm.
Nov 29 01:15:58 np0005539551 systemd: Detected architecture x86-64.
Nov 29 01:15:58 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:15:58 np0005539551 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd: Stopped Switch Root.
Nov 29 01:15:58 np0005539551 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 01:15:58 np0005539551 systemd: Created slice Slice /system/getty.
Nov 29 01:15:58 np0005539551 systemd: Created slice Slice /system/serial-getty.
Nov 29 01:15:58 np0005539551 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 01:15:58 np0005539551 systemd: Created slice User and Session Slice.
Nov 29 01:15:58 np0005539551 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:58 np0005539551 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 01:15:58 np0005539551 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 01:15:58 np0005539551 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:15:58 np0005539551 systemd: Stopped target Switch Root.
Nov 29 01:15:58 np0005539551 systemd: Stopped target Initrd File Systems.
Nov 29 01:15:58 np0005539551 systemd: Stopped target Initrd Root File System.
Nov 29 01:15:58 np0005539551 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 01:15:58 np0005539551 systemd: Reached target Path Units.
Nov 29 01:15:58 np0005539551 systemd: Reached target rpc_pipefs.target.
Nov 29 01:15:58 np0005539551 systemd: Reached target Slice Units.
Nov 29 01:15:58 np0005539551 systemd: Reached target Swaps.
Nov 29 01:15:58 np0005539551 systemd: Reached target Local Verity Protected Volumes.
Nov 29 01:15:58 np0005539551 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 01:15:58 np0005539551 systemd: Reached target RPC Port Mapper.
Nov 29 01:15:58 np0005539551 systemd: Listening on Process Core Dump Socket.
Nov 29 01:15:58 np0005539551 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 01:15:58 np0005539551 systemd: Listening on udev Control Socket.
Nov 29 01:15:58 np0005539551 systemd: Listening on udev Kernel Socket.
Nov 29 01:15:58 np0005539551 systemd: Mounting Huge Pages File System...
Nov 29 01:15:58 np0005539551 systemd: Mounting POSIX Message Queue File System...
Nov 29 01:15:58 np0005539551 systemd: Mounting Kernel Debug File System...
Nov 29 01:15:58 np0005539551 systemd: Mounting Kernel Trace File System...
Nov 29 01:15:58 np0005539551 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:15:58 np0005539551 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:15:58 np0005539551 systemd: Starting Load Kernel Module configfs...
Nov 29 01:15:58 np0005539551 systemd: Starting Load Kernel Module drm...
Nov 29 01:15:58 np0005539551 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 01:15:58 np0005539551 systemd: Starting Load Kernel Module fuse...
Nov 29 01:15:58 np0005539551 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 01:15:58 np0005539551 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd: Stopped File System Check on Root Device.
Nov 29 01:15:58 np0005539551 systemd: Stopped Journal Service.
Nov 29 01:15:58 np0005539551 systemd: Starting Journal Service...
Nov 29 01:15:58 np0005539551 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:15:58 np0005539551 systemd: Starting Generate network units from Kernel command line...
Nov 29 01:15:58 np0005539551 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:15:58 np0005539551 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 01:15:58 np0005539551 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 01:15:58 np0005539551 systemd: Starting Apply Kernel Variables...
Nov 29 01:15:58 np0005539551 systemd: Starting Coldplug All udev Devices...
Nov 29 01:15:58 np0005539551 systemd: Mounted Huge Pages File System.
Nov 29 01:15:58 np0005539551 systemd: Mounted POSIX Message Queue File System.
Nov 29 01:15:58 np0005539551 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 01:15:58 np0005539551 systemd: Mounted Kernel Debug File System.
Nov 29 01:15:58 np0005539551 systemd: Mounted Kernel Trace File System.
Nov 29 01:15:58 np0005539551 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:15:58 np0005539551 kernel: ACPI: bus type drm_connector registered
Nov 29 01:15:58 np0005539551 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd: Finished Load Kernel Module configfs.
Nov 29 01:15:58 np0005539551 systemd: modprobe@drm.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd: Finished Load Kernel Module drm.
Nov 29 01:15:58 np0005539551 systemd-journald[679]: Journal started
Nov 29 01:15:58 np0005539551 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:58 np0005539551 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 01:15:58 np0005539551 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd: Started Journal Service.
Nov 29 01:15:58 np0005539551 kernel: fuse: init (API version 7.37)
Nov 29 01:15:58 np0005539551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 01:15:58 np0005539551 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:15:58 np0005539551 systemd[1]: Mounting FUSE Control File System...
Nov 29 01:15:58 np0005539551 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 01:15:58 np0005539551 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Create System Users...
Nov 29 01:15:58 np0005539551 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:58 np0005539551 systemd-journald[679]: Received client request to flush runtime journal.
Nov 29 01:15:58 np0005539551 systemd[1]: Mounted FUSE Control File System.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 01:15:58 np0005539551 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Create System Users.
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:15:58 np0005539551 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 01:15:58 np0005539551 systemd[1]: Reached target Local File Systems.
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 01:15:58 np0005539551 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 01:15:58 np0005539551 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 01:15:58 np0005539551 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 01:15:58 np0005539551 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:15:58 np0005539551 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Security Auditing Service...
Nov 29 01:15:58 np0005539551 systemd[1]: Starting RPC Bind...
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 01:15:58 np0005539551 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 01:15:58 np0005539551 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 01:15:58 np0005539551 systemd[1]: Started RPC Bind.
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 01:15:58 np0005539551 augenrules[707]: /sbin/augenrules: No change
Nov 29 01:15:58 np0005539551 augenrules[722]: No rules
Nov 29 01:15:58 np0005539551 augenrules[722]: enabled 1
Nov 29 01:15:58 np0005539551 augenrules[722]: failure 1
Nov 29 01:15:58 np0005539551 augenrules[722]: pid 702
Nov 29 01:15:58 np0005539551 augenrules[722]: rate_limit 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_limit 8192
Nov 29 01:15:58 np0005539551 augenrules[722]: lost 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:58 np0005539551 augenrules[722]: enabled 1
Nov 29 01:15:58 np0005539551 augenrules[722]: failure 1
Nov 29 01:15:58 np0005539551 augenrules[722]: pid 702
Nov 29 01:15:58 np0005539551 augenrules[722]: rate_limit 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_limit 8192
Nov 29 01:15:58 np0005539551 augenrules[722]: lost 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog 2
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:58 np0005539551 augenrules[722]: enabled 1
Nov 29 01:15:58 np0005539551 augenrules[722]: failure 1
Nov 29 01:15:58 np0005539551 augenrules[722]: pid 702
Nov 29 01:15:58 np0005539551 augenrules[722]: rate_limit 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_limit 8192
Nov 29 01:15:58 np0005539551 augenrules[722]: lost 0
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog 2
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:58 np0005539551 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:58 np0005539551 systemd[1]: Started Security Auditing Service.
Nov 29 01:15:58 np0005539551 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 01:15:58 np0005539551 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 01:15:59 np0005539551 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 01:15:59 np0005539551 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:15:59 np0005539551 systemd[1]: Starting Update is Completed...
Nov 29 01:15:59 np0005539551 systemd[1]: Finished Update is Completed.
Nov 29 01:15:59 np0005539551 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:15:59 np0005539551 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target System Initialization.
Nov 29 01:15:59 np0005539551 systemd[1]: Started dnf makecache --timer.
Nov 29 01:15:59 np0005539551 systemd[1]: Started Daily rotation of log files.
Nov 29 01:15:59 np0005539551 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target Timer Units.
Nov 29 01:15:59 np0005539551 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 01:15:59 np0005539551 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target Socket Units.
Nov 29 01:15:59 np0005539551 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 01:15:59 np0005539551 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:15:59 np0005539551 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 01:15:59 np0005539551 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:15:59 np0005539551 systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:15:59 np0005539551 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:59 np0005539551 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:15:59 np0005539551 systemd[1]: Started D-Bus System Message Bus.
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target Basic System.
Nov 29 01:15:59 np0005539551 dbus-broker-lau[756]: Ready
Nov 29 01:15:59 np0005539551 systemd[1]: Starting NTP client/server...
Nov 29 01:15:59 np0005539551 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 01:15:59 np0005539551 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 01:15:59 np0005539551 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 01:15:59 np0005539551 systemd[1]: Started irqbalance daemon.
Nov 29 01:15:59 np0005539551 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 01:15:59 np0005539551 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:15:59 np0005539551 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:15:59 np0005539551 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:15:59 np0005539551 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 01:15:59 np0005539551 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 01:15:59 np0005539551 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 01:15:59 np0005539551 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 01:15:59 np0005539551 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 01:15:59 np0005539551 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 01:15:59 np0005539551 chronyd[790]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:15:59 np0005539551 chronyd[790]: Loaded 0 symmetric keys
Nov 29 01:15:59 np0005539551 chronyd[790]: Using right/UTC timezone to obtain leap second data
Nov 29 01:15:59 np0005539551 chronyd[790]: Loaded seccomp filter (level 2)
Nov 29 01:15:59 np0005539551 systemd[1]: Starting User Login Management...
Nov 29 01:15:59 np0005539551 systemd[1]: Started NTP client/server.
Nov 29 01:15:59 np0005539551 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 01:15:59 np0005539551 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:15:59 np0005539551 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:15:59 np0005539551 systemd-logind[788]: New seat seat0.
Nov 29 01:15:59 np0005539551 systemd[1]: Started User Login Management.
Nov 29 01:15:59 np0005539551 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 01:15:59 np0005539551 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 01:15:59 np0005539551 kernel: kvm_amd: TSC scaling supported
Nov 29 01:15:59 np0005539551 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 01:15:59 np0005539551 kernel: kvm_amd: Nested Paging enabled
Nov 29 01:15:59 np0005539551 kernel: kvm_amd: LBR virtualization supported
Nov 29 01:15:59 np0005539551 kernel: Console: switching to colour dummy device 80x25
Nov 29 01:15:59 np0005539551 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 01:15:59 np0005539551 kernel: [drm] features: -context_init
Nov 29 01:15:59 np0005539551 kernel: [drm] number of scanouts: 1
Nov 29 01:15:59 np0005539551 kernel: [drm] number of cap sets: 0
Nov 29 01:15:59 np0005539551 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 01:15:59 np0005539551 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 01:15:59 np0005539551 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 01:15:59 np0005539551 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 01:15:59 np0005539551 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 01:15:59 np0005539551 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 01:15:59 np0005539551 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Nov 29 01:15:59 np0005539551 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 01:15:59 np0005539551 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:15:59 +0000. Up 6.65 seconds.
Nov 29 01:15:59 np0005539551 systemd[1]: run-cloud\x2dinit-tmp-tmp4gdav2yf.mount: Deactivated successfully.
Nov 29 01:16:00 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 01:16:00 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 01:16:00 np0005539551 systemd-hostnamed[853]: Hostname set to <np0005539551.novalocal> (static)
Nov 29 01:16:00 np0005539551 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 01:16:00 np0005539551 systemd[1]: Reached target Preparation for Network.
Nov 29 01:16:00 np0005539551 systemd[1]: Starting Network Manager...
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3033] NetworkManager (version 1.54.1-1.el9) is starting... (boot:7e7d5958-a03f-4d88-a827-2b942ebd1608)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3037] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3096] manager[0x55fb1728c080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3134] hostname: hostname: using hostnamed
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3135] hostname: static hostname changed from (none) to "np0005539551.novalocal"
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3139] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3225] manager[0x55fb1728c080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3226] manager[0x55fb1728c080]: rfkill: WWAN hardware radio set enabled
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3272] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3272] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3273] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3273] manager: Networking is enabled by state file
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3275] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:16:00 np0005539551 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3298] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3332] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3350] dhcp: init: Using DHCP client 'internal'
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3355] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3373] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3383] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3391] device (lo): Activation: starting connection 'lo' (bd43838b-4852-4c08-8b58-5443d26d81e3)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3402] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3407] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3437] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3442] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3447] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3449] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3452] device (eth0): carrier: link connected
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3457] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3464] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3470] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3475] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3477] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3480] manager: NetworkManager state is now CONNECTING
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3482] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3489] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3493] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:16:00 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:16:00 np0005539551 systemd[1]: Started Network Manager.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3535] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3543] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:16:00 np0005539551 systemd[1]: Reached target Network.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3564] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:16:00 np0005539551 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 01:16:00 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3739] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3741] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3749] device (lo): Activation: successful, device activated.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3759] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3761] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3765] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3770] device (eth0): Activation: successful, device activated.
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3778] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:16:00 np0005539551 NetworkManager[857]: <info>  [1764396960.3782] manager: startup complete
Nov 29 01:16:00 np0005539551 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 01:16:00 np0005539551 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:16:00 np0005539551 systemd[1]: Reached target NFS client services.
Nov 29 01:16:00 np0005539551 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:16:00 np0005539551 systemd[1]: Reached target Remote File Systems.
Nov 29 01:16:00 np0005539551 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:00 np0005539551 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:16:00 np0005539551 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 01:16:00 np0005539551 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:16:00 +0000. Up 7.62 seconds.
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.193         | 255.255.255.0 | global | fa:16:3e:9e:c7:59 |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe9e:c759/64 |       .       |  link  | fa:16:3e:9e:c7:59 |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 01:16:00 np0005539551 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:01 np0005539551 cloud-init[920]: Generating public/private rsa key pair.
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: SHA256:Ms9bdFdpJ0aEmruq16JD3G0WReFhtkySgqg5/hwehik root@np0005539551.novalocal
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: +---[RSA 3072]----+
Nov 29 01:16:01 np0005539551 cloud-init[920]: |     . .  .oB+o  |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    . . . .Bo+  .|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |   o     . ++ ooo|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |  +       +  ..o.|
Nov 29 01:16:01 np0005539551 cloud-init[920]: | . + .o.S..o. .  |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |E + + o=..=. .   |
Nov 29 01:16:01 np0005539551 cloud-init[920]: | . = +  o+..     |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    + . ooo      |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |      o=oo       |
Nov 29 01:16:01 np0005539551 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:01 np0005539551 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: SHA256:Bv2Dl87xbF+0dqEwWI9YIUPCfmrAjTyEVv1MHUgqRpk root@np0005539551.novalocal
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: +---[ECDSA 256]---+
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    oo=.o=oo.    |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |   o.E +ooo..    |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |  . +o=.=  o     |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    .*.+ *=.o    |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |      o So*+ . ..|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |       + + =o ..o|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |      .   o +. oo|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |           . ....|
Nov 29 01:16:01 np0005539551 cloud-init[920]: |              .  |
Nov 29 01:16:01 np0005539551 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:01 np0005539551 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 01:16:01 np0005539551 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: SHA256:+KBE3pHtzghPJNBdJSzw0LsncjbBw6c55XB/LGq3y0A root@np0005539551.novalocal
Nov 29 01:16:01 np0005539551 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:01 np0005539551 cloud-init[920]: +--[ED25519 256]--+
Nov 29 01:16:01 np0005539551 cloud-init[920]: |  ..o+ oo..      |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |   ..o+o..       |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    oo=o.        |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |   o +B++        |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    + =%E. .     |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |   ..=XBo o o    |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |    .+o==. o     |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |        oo.      |
Nov 29 01:16:01 np0005539551 cloud-init[920]: |       . .+o     |
Nov 29 01:16:01 np0005539551 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:02 np0005539551 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 01:16:02 np0005539551 systemd[1]: Reached target Cloud-config availability.
Nov 29 01:16:02 np0005539551 systemd[1]: Reached target Network is Online.
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 01:16:02 np0005539551 systemd[1]: Starting System Logging Service...
Nov 29 01:16:02 np0005539551 sm-notify[1003]: Version 2.5.4 starting
Nov 29 01:16:02 np0005539551 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Permit User Sessions...
Nov 29 01:16:02 np0005539551 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 01:16:02 np0005539551 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:16:02 np0005539551 systemd[1]: Finished Permit User Sessions.
Nov 29 01:16:02 np0005539551 systemd[1]: Started Command Scheduler.
Nov 29 01:16:02 np0005539551 systemd[1]: Started Getty on tty1.
Nov 29 01:16:02 np0005539551 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 01:16:02 np0005539551 systemd[1]: Reached target Login Prompts.
Nov 29 01:16:02 np0005539551 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 29 01:16:02 np0005539551 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 01:16:02 np0005539551 systemd[1]: Started System Logging Service.
Nov 29 01:16:02 np0005539551 systemd[1]: Reached target Multi-User System.
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 01:16:02 np0005539551 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 01:16:02 np0005539551 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 01:16:02 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:16:02 np0005539551 kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Nov 29 01:16:02 np0005539551 kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 01:16:02 np0005539551 cloud-init[1118]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:16:02 +0000. Up 9.21 seconds.
Nov 29 01:16:02 np0005539551 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 01:16:02 np0005539551 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 01:16:02 np0005539551 dracut[1268]: dracut-057-102.git20250818.el9
Nov 29 01:16:02 np0005539551 cloud-init[1291]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:16:02 +0000. Up 9.60 seconds.
Nov 29 01:16:02 np0005539551 cloud-init[1298]: #############################################################
Nov 29 01:16:02 np0005539551 cloud-init[1299]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:02 np0005539551 cloud-init[1303]: 256 SHA256:Bv2Dl87xbF+0dqEwWI9YIUPCfmrAjTyEVv1MHUgqRpk root@np0005539551.novalocal (ECDSA)
Nov 29 01:16:02 np0005539551 cloud-init[1310]: 256 SHA256:+KBE3pHtzghPJNBdJSzw0LsncjbBw6c55XB/LGq3y0A root@np0005539551.novalocal (ED25519)
Nov 29 01:16:02 np0005539551 cloud-init[1317]: 3072 SHA256:Ms9bdFdpJ0aEmruq16JD3G0WReFhtkySgqg5/hwehik root@np0005539551.novalocal (RSA)
Nov 29 01:16:02 np0005539551 cloud-init[1320]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:02 np0005539551 dracut[1272]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 01:16:02 np0005539551 cloud-init[1323]: #############################################################
Nov 29 01:16:02 np0005539551 cloud-init[1291]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:16:02 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.83 seconds
Nov 29 01:16:02 np0005539551 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 01:16:02 np0005539551 systemd[1]: Reached target Cloud-init target.
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: memstrack is not available
Nov 29 01:16:03 np0005539551 dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:03 np0005539551 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:04 np0005539551 dracut[1272]: memstrack is not available
Nov 29 01:16:04 np0005539551 dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:04 np0005539551 dracut[1272]: *** Including module: systemd ***
Nov 29 01:16:04 np0005539551 dracut[1272]: *** Including module: fips ***
Nov 29 01:16:05 np0005539551 dracut[1272]: *** Including module: systemd-initrd ***
Nov 29 01:16:05 np0005539551 dracut[1272]: *** Including module: i18n ***
Nov 29 01:16:05 np0005539551 dracut[1272]: *** Including module: drm ***
Nov 29 01:16:05 np0005539551 chronyd[790]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 29 01:16:05 np0005539551 chronyd[790]: System clock TAI offset set to 37 seconds
Nov 29 01:16:05 np0005539551 dracut[1272]: *** Including module: prefixdevname ***
Nov 29 01:16:05 np0005539551 dracut[1272]: *** Including module: kernel-modules ***
Nov 29 01:16:05 np0005539551 kernel: block vda: the capability attribute has been deprecated.
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: kernel-modules-extra ***
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: qemu ***
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: fstab-sys ***
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: rootfs-block ***
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: terminfo ***
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: udev-rules ***
Nov 29 01:16:06 np0005539551 dracut[1272]: Skipping udev rule: 91-permissions.rules
Nov 29 01:16:06 np0005539551 dracut[1272]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 01:16:06 np0005539551 dracut[1272]: *** Including module: virtiofs ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: dracut-systemd ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: usrmount ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: base ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: fs-lib ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: kdumpbase ***
Nov 29 01:16:07 np0005539551 dracut[1272]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 01:16:07 np0005539551 dracut[1272]:  microcode_ctl module: mangling fw_dir
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel" is ignored
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 01:16:07 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 01:16:08 np0005539551 dracut[1272]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 01:16:08 np0005539551 dracut[1272]: *** Including module: openssl ***
Nov 29 01:16:08 np0005539551 dracut[1272]: *** Including module: shutdown ***
Nov 29 01:16:08 np0005539551 dracut[1272]: *** Including module: squash ***
Nov 29 01:16:08 np0005539551 dracut[1272]: *** Including modules done ***
Nov 29 01:16:08 np0005539551 dracut[1272]: *** Installing kernel module dependencies ***
Nov 29 01:16:09 np0005539551 dracut[1272]: *** Installing kernel module dependencies done ***
Nov 29 01:16:09 np0005539551 dracut[1272]: *** Resolving executable dependencies ***
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 25 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 31 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 28 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 32 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 30 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 01:16:10 np0005539551 irqbalance[784]: IRQ 29 affinity is now unmanaged
Nov 29 01:16:10 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:16:10 np0005539551 dracut[1272]: *** Resolving executable dependencies done ***
Nov 29 01:16:10 np0005539551 dracut[1272]: *** Generating early-microcode cpio image ***
Nov 29 01:16:10 np0005539551 dracut[1272]: *** Store current command line parameters ***
Nov 29 01:16:10 np0005539551 dracut[1272]: Stored kernel commandline:
Nov 29 01:16:10 np0005539551 dracut[1272]: No dracut internal kernel commandline stored in the initramfs
Nov 29 01:16:11 np0005539551 dracut[1272]: *** Install squash loader ***
Nov 29 01:16:11 np0005539551 dracut[1272]: *** Squashing the files inside the initramfs ***
Nov 29 01:16:13 np0005539551 dracut[1272]: *** Squashing the files inside the initramfs done ***
Nov 29 01:16:13 np0005539551 dracut[1272]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 01:16:13 np0005539551 dracut[1272]: *** Hardlinking files ***
Nov 29 01:16:13 np0005539551 dracut[1272]: *** Hardlinking files done ***
Nov 29 01:16:13 np0005539551 dracut[1272]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 01:16:14 np0005539551 kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Nov 29 01:16:14 np0005539551 kdumpctl[1016]: kdump: Starting kdump: [OK]
Nov 29 01:16:14 np0005539551 systemd[1]: Finished Crash recovery kernel arming.
Nov 29 01:16:14 np0005539551 systemd[1]: Startup finished in 1.895s (kernel) + 2.794s (initrd) + 16.686s (userspace) = 21.375s.
Nov 29 01:16:27 np0005539551 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 01:16:27 np0005539551 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 01:16:27 np0005539551 systemd-logind[788]: New session 1 of user zuul.
Nov 29 01:16:27 np0005539551 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 01:16:27 np0005539551 systemd[1]: Starting User Manager for UID 1000...
Nov 29 01:16:27 np0005539551 systemd[4298]: Queued start job for default target Main User Target.
Nov 29 01:16:28 np0005539551 systemd[4298]: Created slice User Application Slice.
Nov 29 01:16:28 np0005539551 systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:16:28 np0005539551 systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:16:28 np0005539551 systemd[4298]: Reached target Paths.
Nov 29 01:16:28 np0005539551 systemd[4298]: Reached target Timers.
Nov 29 01:16:28 np0005539551 systemd[4298]: Starting D-Bus User Message Bus Socket...
Nov 29 01:16:28 np0005539551 systemd[4298]: Starting Create User's Volatile Files and Directories...
Nov 29 01:16:28 np0005539551 systemd[4298]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:16:28 np0005539551 systemd[4298]: Reached target Sockets.
Nov 29 01:16:28 np0005539551 systemd[4298]: Finished Create User's Volatile Files and Directories.
Nov 29 01:16:28 np0005539551 systemd[4298]: Reached target Basic System.
Nov 29 01:16:28 np0005539551 systemd[4298]: Reached target Main User Target.
Nov 29 01:16:28 np0005539551 systemd[4298]: Startup finished in 122ms.
Nov 29 01:16:28 np0005539551 systemd[1]: Started User Manager for UID 1000.
Nov 29 01:16:28 np0005539551 systemd[1]: Started Session 1 of User zuul.
Nov 29 01:16:28 np0005539551 python3[4380]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:30 np0005539551 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:16:33 np0005539551 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:41 np0005539551 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:42 np0005539551 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 01:16:44 np0005539551 python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCudHL3tHiIrGUdr3CZx/jgOAB+sTyj6z0B6SJoPEhADY63ZPdbzjdQhpgyhpnTdwlh6+Z4xPQ+DxOd+FPfH9ETfjTAZyODPGBr+U3/aWYFrr1YsSqkwWe+DI0V25XzOJIl8WeH42Z3m8/2jQ1VE7oAtX0LFpiSM33D5G6jGs1zirRd3I823HIkLEWTOQev3Et6zuPF/J/lkKUMsa94htQ/yvthrhpk7+QsWEk8T5uet2LZvnIsjZFIgfCCgTeGtE4eqcC9tdVxfYIwVhUeu3eCkwwBkVi0t0HhAh3qbiXsTIErO5yg2fPPye0mC6UjHMgSqc5crO5b4VU6uuoKLqXHXfoyrjf1PG1bb3S1A7UO9fs+mG8UJ2N53kHSyQ5YcQ+hZyyXqVeKPIQFPvwTxYMEb+rxzq5f56DdR8ruRmocVTqpGu1VTEdGIWkU8IaEB9kOEu7t8oFgiym6LUXBbbd9a6AkVCauAPe7Kq0Q4VHZVxtWSFjTAEi5x3CFG2qzn0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:45 np0005539551 python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:45 np0005539551 python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:46 np0005539551 python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397005.4086952-252-130112345227020/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa follow=False checksum=03e35f16cd901f940500378f2e2f2ebf2de0be9d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:46 np0005539551 python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:47 np0005539551 python3[4922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397006.3610198-307-74675733011756/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa.pub follow=False checksum=b1e1e3a6e20142e56b32029e5e58b508f3db73ab backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:48 np0005539551 python3[4970]: ansible-ping Invoked with data=pong
Nov 29 01:16:49 np0005539551 python3[4994]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:52 np0005539551 python3[5052]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 01:16:53 np0005539551 python3[5084]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539551 python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539551 python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539551 python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539551 python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539551 python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:56 np0005539551 python3[5230]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:57 np0005539551 python3[5308]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:57 np0005539551 python3[5381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397016.5549-32-242445070195286/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:58 np0005539551 python3[5429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539551 python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539551 python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539551 python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539551 python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539551 python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539551 python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539551 python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539551 python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539551 python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539551 python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539551 python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539551 python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539551 python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539551 python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539551 python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539551 python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539551 python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539551 python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539551 python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539551 python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539551 python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539551 python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539551 python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539551 python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539551 python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:08 np0005539551 python3[6055]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:17:08 np0005539551 systemd[1]: Starting Time & Date Service...
Nov 29 01:17:08 np0005539551 systemd[1]: Started Time & Date Service.
Nov 29 01:17:08 np0005539551 systemd-timedated[6057]: Changed time zone to 'UTC' (UTC).
Nov 29 01:17:08 np0005539551 python3[6086]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:09 np0005539551 python3[6162]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:09 np0005539551 python3[6233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397028.8867528-252-1734839239181/source _original_basename=tmpeeq53s0f follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:10 np0005539551 python3[6333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:10 np0005539551 python3[6404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397029.8116682-302-97650039868669/source _original_basename=tmpdmaj6nxt follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:11 np0005539551 python3[6506]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:11 np0005539551 python3[6579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397030.8533435-382-275829963614612/source _original_basename=tmpdunlt8vc follow=False checksum=cd982ffd608592e8f819f22d6376b4402103f855 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:12 np0005539551 python3[6627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:12 np0005539551 python3[6653]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:12 np0005539551 python3[6733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:13 np0005539551 python3[6806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397032.6322901-452-187895075028907/source _original_basename=tmpi4j43onu follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:13 np0005539551 python3[6857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2a81-8810-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:14 np0005539551 python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2a81-8810-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 01:17:16 np0005539551 python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:38 np0005539551 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:18:03 np0005539551 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:35 np0005539551 systemd[4298]: Starting Mark boot as successful...
Nov 29 01:18:35 np0005539551 systemd[4298]: Finished Mark boot as successful.
Nov 29 01:19:03 np0005539551 systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 01:19:09 np0005539551 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 01:19:09 np0005539551 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1407] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:09 np0005539551 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1578] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1610] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1615] device (eth1): carrier: link connected
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1617] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1623] policy: auto-activating connection 'Wired connection 1' (acf43975-524b-3c36-83b0-d7f3a21fba6b)
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1627] device (eth1): Activation: starting connection 'Wired connection 1' (acf43975-524b-3c36-83b0-d7f3a21fba6b)
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1629] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1636] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:19:09 np0005539551 NetworkManager[857]: <info>  [1764397149.1641] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:09 np0005539551 systemd-logind[788]: New session 3 of user zuul.
Nov 29 01:19:09 np0005539551 systemd[1]: Started Session 3 of User zuul.
Nov 29 01:19:09 np0005539551 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-37f5-2ec2-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:20 np0005539551 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:20 np0005539551 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397159.8214705-155-94460744393631/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=984a7050c6c355177ceca27983886e9fc9130018 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:21 np0005539551 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:19:21 np0005539551 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:19:21 np0005539551 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:19:21 np0005539551 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:19:21 np0005539551 systemd[1]: Stopping Network Manager...
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1049] caught SIGTERM, shutting down normally.
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1060] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1060] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1061] dhcp4 (eth0): state changed no lease
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1062] manager: NetworkManager state is now CONNECTING
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1163] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.1163] dhcp4 (eth1): state changed no lease
Nov 29 01:19:21 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:19:21 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:19:21 np0005539551 NetworkManager[857]: <info>  [1764397161.2057] exiting (success)
Nov 29 01:19:21 np0005539551 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:19:21 np0005539551 systemd[1]: Stopped Network Manager.
Nov 29 01:19:21 np0005539551 systemd[1]: NetworkManager.service: Consumed 1.516s CPU time, 10.1M memory peak.
Nov 29 01:19:21 np0005539551 systemd[1]: Starting Network Manager...
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.2820] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7e7d5958-a03f-4d88-a827-2b942ebd1608)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.2822] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.2900] manager[0x5618c2f48070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:19:21 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 01:19:21 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3771] hostname: hostname: using hostnamed
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3772] hostname: static hostname changed from (none) to "np0005539551.novalocal"
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3776] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3780] manager[0x5618c2f48070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3781] manager[0x5618c2f48070]: rfkill: WWAN hardware radio set enabled
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3807] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3808] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3808] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3808] manager: Networking is enabled by state file
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3810] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3813] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3835] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3843] dhcp: init: Using DHCP client 'internal'
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3846] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3853] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3858] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3865] device (lo): Activation: starting connection 'lo' (bd43838b-4852-4c08-8b58-5443d26d81e3)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3872] device (eth0): carrier: link connected
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3875] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3880] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3880] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3885] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3890] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3895] device (eth1): carrier: link connected
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3898] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3902] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (acf43975-524b-3c36-83b0-d7f3a21fba6b) (indicated)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3902] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3906] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3911] device (eth1): Activation: starting connection 'Wired connection 1' (acf43975-524b-3c36-83b0-d7f3a21fba6b)
Nov 29 01:19:21 np0005539551 systemd[1]: Started Network Manager.
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3924] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3926] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3928] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3929] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3931] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3933] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3934] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3936] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3937] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3943] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3945] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3951] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3952] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3967] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3971] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3976] device (lo): Activation: successful, device activated.
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3981] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.3987] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:19:21 np0005539551 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4867] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4935] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4938] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4947] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4955] device (eth0): Activation: successful, device activated.
Nov 29 01:19:21 np0005539551 NetworkManager[7196]: <info>  [1764397161.4966] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:19:21 np0005539551 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-37f5-2ec2-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:31 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:19:51 np0005539551 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0586] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:20:07 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:20:07 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0928] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0930] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0943] device (eth1): Activation: successful, device activated.
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0954] manager: startup complete
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0958] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <warn>  [1764397207.0967] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.0980] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1113] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1114] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1114] dhcp4 (eth1): state changed no lease
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1134] policy: auto-activating connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f)
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1139] device (eth1): Activation: starting connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f)
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1140] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1143] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1151] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1161] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1535] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1540] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:20:07 np0005539551 NetworkManager[7196]: <info>  [1764397207.1555] device (eth1): Activation: successful, device activated.
Nov 29 01:20:17 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:21 np0005539551 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 01:20:21 np0005539551 systemd[1]: session-3.scope: Consumed 1.640s CPU time.
Nov 29 01:20:21 np0005539551 systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Nov 29 01:20:21 np0005539551 systemd-logind[788]: Removed session 3.
Nov 29 01:21:07 np0005539551 systemd-logind[788]: New session 4 of user zuul.
Nov 29 01:21:07 np0005539551 systemd[1]: Started Session 4 of User zuul.
Nov 29 01:21:07 np0005539551 python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:21:07 np0005539551 python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397267.1399896-373-220295794628292/source _original_basename=tmp9_mz346f follow=False checksum=8b8510474e05b0641525b6d1881e4eebcca5fe7b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:21:11 np0005539551 systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Nov 29 01:21:11 np0005539551 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 01:21:11 np0005539551 systemd-logind[788]: Removed session 4.
Nov 29 01:21:35 np0005539551 systemd[4298]: Created slice User Background Tasks Slice.
Nov 29 01:21:35 np0005539551 systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:21:35 np0005539551 systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:27:16 np0005539551 systemd-logind[788]: New session 5 of user zuul.
Nov 29 01:27:16 np0005539551 systemd[1]: Started Session 5 of User zuul.
Nov 29 01:27:16 np0005539551 python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000ca8-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:17 np0005539551 python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:17 np0005539551 python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539551 python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539551 python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:19 np0005539551 python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:20 np0005539551 python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:27:20 np0005539551 python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397640.0726516-367-258302792747130/source _original_basename=tmpt3_1vy9n follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:21 np0005539551 python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:27:21 np0005539551 systemd[1]: Reloading.
Nov 29 01:27:21 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:27:23 np0005539551 python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 01:27:23 np0005539551 python3[7919]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539551 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539551 python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539551 python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539551 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000caf-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539551 python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:27:28 np0005539551 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 01:27:28 np0005539551 systemd[1]: session-5.scope: Consumed 4.206s CPU time.
Nov 29 01:27:28 np0005539551 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Nov 29 01:27:28 np0005539551 systemd-logind[788]: Removed session 5.
Nov 29 01:27:30 np0005539551 irqbalance[784]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 01:27:30 np0005539551 irqbalance[784]: IRQ 27 affinity is now unmanaged
Nov 29 01:27:30 np0005539551 systemd-logind[788]: New session 6 of user zuul.
Nov 29 01:27:30 np0005539551 systemd[1]: Started Session 6 of User zuul.
Nov 29 01:27:30 np0005539551 python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:28:01 np0005539551 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:01 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:29:43 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:29:55 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:29:57 np0005539551 setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 01:29:57 np0005539551 setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 01:30:10 np0005539551 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:10 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:30 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:30:30 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:30:30 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:30:30 np0005539551 systemd[1]: Reloading.
Nov 29 01:30:30 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:30 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:30:32 np0005539551 python3[10482]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-fce3-968d-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:33 np0005539551 kernel: evm: overlay not supported
Nov 29 01:30:33 np0005539551 systemd[4298]: Starting D-Bus User Message Bus...
Nov 29 01:30:33 np0005539551 dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 01:30:33 np0005539551 dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 01:30:33 np0005539551 systemd[4298]: Started D-Bus User Message Bus.
Nov 29 01:30:33 np0005539551 dbus-broker-lau[11762]: Ready
Nov 29 01:30:33 np0005539551 systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:30:33 np0005539551 systemd[4298]: Created slice Slice /user.
Nov 29 01:30:33 np0005539551 systemd[4298]: podman-11621.scope: unit configures an IP firewall, but not running as root.
Nov 29 01:30:33 np0005539551 systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 01:30:33 np0005539551 systemd[4298]: Started podman-11621.scope.
Nov 29 01:30:33 np0005539551 systemd[4298]: Started podman-pause-e286e4c5.scope.
Nov 29 01:30:36 np0005539551 python3[13989]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:36 np0005539551 python3[13989]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 01:30:37 np0005539551 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 01:30:37 np0005539551 systemd[1]: session-6.scope: Consumed 1min 10.869s CPU time.
Nov 29 01:30:37 np0005539551 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Nov 29 01:30:37 np0005539551 systemd-logind[788]: Removed session 6.
Nov 29 01:30:53 np0005539551 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 01:30:53 np0005539551 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 01:30:53 np0005539551 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 01:30:53 np0005539551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 01:31:02 np0005539551 systemd-logind[788]: New session 7 of user zuul.
Nov 29 01:31:02 np0005539551 systemd[1]: Started Session 7 of User zuul.
Nov 29 01:31:02 np0005539551 python3[23498]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:03 np0005539551 python3[23648]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:04 np0005539551 python3[23943]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539551.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 01:31:04 np0005539551 python3[24161]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:05 np0005539551 python3[24439]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:31:05 np0005539551 python3[24693]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397864.74817-168-201229415312391/source _original_basename=tmpclkprj6f follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:06 np0005539551 python3[25028]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 29 01:31:06 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 01:31:06 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 01:31:06 np0005539551 systemd-hostnamed[25138]: Changed pretty hostname to 'compute-1'
Nov 29 01:31:06 np0005539551 systemd-hostnamed[25138]: Hostname set to <compute-1> (static)
Nov 29 01:31:06 np0005539551 NetworkManager[7196]: <info>  [1764397866.6617] hostname: static hostname changed from "np0005539551.novalocal" to "compute-1"
Nov 29 01:31:06 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:31:06 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:31:07 np0005539551 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 01:31:07 np0005539551 systemd[1]: session-7.scope: Consumed 2.468s CPU time.
Nov 29 01:31:07 np0005539551 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Nov 29 01:31:07 np0005539551 systemd-logind[788]: Removed session 7.
Nov 29 01:31:16 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:31:20 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:31:20 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:31:20 np0005539551 systemd[1]: man-db-cache-update.service: Consumed 58.235s CPU time.
Nov 29 01:31:20 np0005539551 systemd[1]: run-rd12c400a121148358a4be5ad198e92e1.service: Deactivated successfully.
Nov 29 01:31:36 np0005539551 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:34:35 np0005539551 systemd[1]: Starting dnf makecache...
Nov 29 01:34:35 np0005539551 dnf[29919]: Failed determining last makecache time.
Nov 29 01:34:36 np0005539551 dnf[29919]: CentOS Stream 9 - BaseOS                         26 kB/s | 7.3 kB     00:00
Nov 29 01:34:36 np0005539551 dnf[29919]: CentOS Stream 9 - AppStream                      27 kB/s | 7.4 kB     00:00
Nov 29 01:34:36 np0005539551 dnf[29919]: CentOS Stream 9 - CRB                            83 kB/s | 7.2 kB     00:00
Nov 29 01:34:36 np0005539551 dnf[29919]: CentOS Stream 9 - Extras packages                80 kB/s | 8.3 kB     00:00
Nov 29 01:34:37 np0005539551 dnf[29919]: Metadata cache created.
Nov 29 01:34:37 np0005539551 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:34:37 np0005539551 systemd[1]: Finished dnf makecache.
Nov 29 01:36:12 np0005539551 systemd-logind[788]: New session 8 of user zuul.
Nov 29 01:36:12 np0005539551 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:36:12 np0005539551 python3[30000]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:36:16 np0005539551 python3[30116]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:16 np0005539551 python3[30189]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:16 np0005539551 python3[30215]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:17 np0005539551 python3[30288]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:17 np0005539551 python3[30314]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:17 np0005539551 python3[30387]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:18 np0005539551 python3[30413]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:18 np0005539551 python3[30486]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:18 np0005539551 python3[30512]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539551 python3[30586]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:19 np0005539551 python3[30612]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539551 python3[30685]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:19 np0005539551 python3[30711]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:20 np0005539551 python3[30784]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.626953-34057-86357935024630/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:32 np0005539551 python3[30832]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:32 np0005539551 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:41:32 np0005539551 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:41:32 np0005539551 systemd[1]: session-8.scope: Consumed 5.381s CPU time.
Nov 29 01:41:32 np0005539551 systemd-logind[788]: Removed session 8.
Nov 29 02:00:15 np0005539551 systemd-logind[788]: New session 9 of user zuul.
Nov 29 02:00:15 np0005539551 systemd[1]: Started Session 9 of User zuul.
Nov 29 02:00:16 np0005539551 python3.9[30996]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:17 np0005539551 python3.9[31177]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:00:32 np0005539551 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 02:00:32 np0005539551 systemd[1]: session-9.scope: Consumed 8.168s CPU time.
Nov 29 02:00:32 np0005539551 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Nov 29 02:00:32 np0005539551 systemd-logind[788]: Removed session 9.
Nov 29 02:00:48 np0005539551 systemd-logind[788]: New session 10 of user zuul.
Nov 29 02:00:48 np0005539551 systemd[1]: Started Session 10 of User zuul.
Nov 29 02:00:48 np0005539551 python3.9[31390]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:00:50 np0005539551 python3.9[31564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:51 np0005539551 python3.9[31716]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:00:52 np0005539551 python3.9[31869]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:00:53 np0005539551 python3.9[32021]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:54 np0005539551 python3.9[32173]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:00:54 np0005539551 python3.9[32296]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399653.432385-183-197137720319154/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:55 np0005539551 python3.9[32448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:56 np0005539551 python3.9[32604]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:57 np0005539551 python3.9[32756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:58 np0005539551 python3.9[32906]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:01:02 np0005539551 python3.9[33174]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:03 np0005539551 python3.9[33324]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:04 np0005539551 python3.9[33478]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:06 np0005539551 python3.9[33636]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:01:07 np0005539551 python3.9[33720]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:00 np0005539551 systemd[1]: Reloading.
Nov 29 02:02:00 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:00 np0005539551 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 02:02:02 np0005539551 systemd[1]: Reloading.
Nov 29 02:02:02 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:02 np0005539551 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 02:02:02 np0005539551 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 02:02:02 np0005539551 systemd[1]: Reloading.
Nov 29 02:02:02 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:02 np0005539551 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 02:02:03 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:02:03 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:02:03 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:03:29 np0005539551 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:03:29 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:03:29 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 02:03:30 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:30 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:30 np0005539551 systemd[1]: Reloading.
Nov 29 02:03:30 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:30 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:34 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:34 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:34 np0005539551 systemd[1]: man-db-cache-update.service: Consumed 1.131s CPU time.
Nov 29 02:03:34 np0005539551 systemd[1]: run-r687d7296321e455a8fac93016c6ca2fb.service: Deactivated successfully.
Nov 29 02:04:09 np0005539551 python3.9[35228]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:11 np0005539551 python3.9[35509]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:04:12 np0005539551 python3.9[35661]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:04:15 np0005539551 python3.9[35814]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:17 np0005539551 python3.9[35966]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:04:18 np0005539551 python3.9[36118]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:19 np0005539551 python3.9[36270]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:20 np0005539551 python3.9[36393]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399858.773563-672-3957419544082/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:32 np0005539551 python3.9[36545]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:33 np0005539551 python3.9[36697]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:34 np0005539551 python3.9[36850]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:35 np0005539551 python3.9[37002]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:04:35 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:04:36 np0005539551 python3.9[37156]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:39 np0005539551 python3.9[37314]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:04:41 np0005539551 python3.9[37474]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:04:42 np0005539551 python3.9[37627]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:44 np0005539551 python3.9[37785]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:04:45 np0005539551 python3.9[37937]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:50 np0005539551 python3.9[38090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:51 np0005539551 python3.9[38242]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:51 np0005539551 python3.9[38365]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399890.553052-1029-255485841274968/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:52 np0005539551 python3.9[38517]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:04:52 np0005539551 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:04:52 np0005539551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 02:04:52 np0005539551 kernel: Bridge firewalling registered
Nov 29 02:04:52 np0005539551 systemd-modules-load[38521]: Inserted module 'br_netfilter'
Nov 29 02:04:52 np0005539551 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:04:53 np0005539551 python3.9[38676]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:54 np0005539551 python3.9[38799]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399893.2981017-1099-173033904452367/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:55 np0005539551 python3.9[38951]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:59 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:04:59 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:05:02 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:05:02 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:05:02 np0005539551 systemd[1]: Reloading.
Nov 29 02:05:02 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:02 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:05:05 np0005539551 python3.9[40983]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:05 np0005539551 python3.9[41927]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:05:06 np0005539551 python3.9[42691]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:07 np0005539551 python3.9[43183]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:07 np0005539551 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:08 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:05:08 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:05:08 np0005539551 systemd[1]: man-db-cache-update.service: Consumed 5.322s CPU time.
Nov 29 02:05:08 np0005539551 systemd[1]: run-r64939cdde26d46a88f19fb7cb0663f2d.service: Deactivated successfully.
Nov 29 02:05:08 np0005539551 systemd[1]: Starting Authorization Manager...
Nov 29 02:05:08 np0005539551 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:05:08 np0005539551 polkitd[43402]: Started polkitd version 0.117
Nov 29 02:05:08 np0005539551 systemd[1]: Started Authorization Manager.
Nov 29 02:05:09 np0005539551 python3.9[43572]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:09 np0005539551 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:05:09 np0005539551 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:05:09 np0005539551 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:05:09 np0005539551 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:09 np0005539551 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:05:11 np0005539551 python3.9[43734]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:05:14 np0005539551 python3.9[43886]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:14 np0005539551 systemd[1]: Reloading.
Nov 29 02:05:14 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:15 np0005539551 python3.9[44076]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:05:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:18 np0005539551 python3.9[44265]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539551 python3.9[44418]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539551 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
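The swap sequence above writes a 1024 MiB file with `dd` (02:04:12), then `mkswap`/`swapon` it; the kernel reports 1048572 KiB usable rather than 1048576 KiB because mkswap reserves the first page (4 KiB on x86_64) for the swap signature header. A small sketch parsing the kernel line and checking that arithmetic:

```python
import re

# Kernel message emitted when swapon activates the file.
line = "Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k"

m = re.search(r"Adding (\d+)k swap on (\S+)\.", line)
usable_kib = int(m.group(1))   # what the kernel can actually use
path = m.group(2)

written_kib = 1024 * 1024      # dd count=1024 bs=1M -> 1 GiB written
header_kib = written_kib - usable_kib  # space taken by the swap header page
print(path, usable_kib, header_kib)
```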
Nov 29 02:05:20 np0005539551 python3.9[44571]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:22 np0005539551 python3.9[44733]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:23 np0005539551 python3.9[44886]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:05:23 np0005539551 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 02:05:23 np0005539551 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 02:05:23 np0005539551 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 02:05:23 np0005539551 systemd[1]: Starting Apply Kernel Variables...
Nov 29 02:05:23 np0005539551 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 02:05:23 np0005539551 systemd[1]: Finished Apply Kernel Variables.
Nov 29 02:05:24 np0005539551 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 02:05:24 np0005539551 systemd[1]: session-10.scope: Consumed 2min 25.043s CPU time.
Nov 29 02:05:24 np0005539551 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Nov 29 02:05:24 np0005539551 systemd-logind[788]: Removed session 10.
Nov 29 02:05:29 np0005539551 systemd-logind[788]: New session 11 of user zuul.
Nov 29 02:05:29 np0005539551 systemd[1]: Started Session 11 of User zuul.
Nov 29 02:05:31 np0005539551 python3.9[45069]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:05:33 np0005539551 python3.9[45225]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:05:33 np0005539551 python3.9[45378]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:05:35 np0005539551 python3.9[45536]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:05:37 np0005539551 python3.9[45696]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:37 np0005539551 python3.9[45780]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:05:42 np0005539551 python3.9[45945]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:06 np0005539551 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:06:06 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:06:11 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 02:06:11 np0005539551 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 02:06:14 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:14 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:14 np0005539551 systemd[1]: Reloading.
Nov 29 02:06:14 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:14 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:14 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:18 np0005539551 python3.9[47043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:06:18 np0005539551 systemd[1]: Reloading.
Nov 29 02:06:18 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:18 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:18 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:18 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:18 np0005539551 systemd[1]: run-rba63341e9d23490fa6a574f9d9c53188.service: Deactivated successfully.
Nov 29 02:06:18 np0005539551 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 02:06:18 np0005539551 chown[47085]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 02:06:18 np0005539551 ovs-ctl[47091]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 02:06:18 np0005539551 ovs-ctl[47091]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 02:06:18 np0005539551 ovs-ctl[47091]: Starting ovsdb-server [  OK  ]
Nov 29 02:06:18 np0005539551 ovs-vsctl[47140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 02:06:19 np0005539551 ovs-vsctl[47156]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a37d8697-3fee-4a55-8dd5-3894cb7e8e1c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 02:06:19 np0005539551 ovs-ctl[47091]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 02:06:19 np0005539551 ovs-ctl[47091]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:19 np0005539551 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 29 02:06:19 np0005539551 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 02:06:19 np0005539551 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 02:06:19 np0005539551 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 02:06:19 np0005539551 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 02:06:19 np0005539551 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 02:06:19 np0005539551 ovs-ctl[47211]: Inserting openvswitch module [  OK  ]
Nov 29 02:06:19 np0005539551 ovs-ctl[47180]: Starting ovs-vswitchd [  OK  ]
Nov 29 02:06:19 np0005539551 ovs-vsctl[47228]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 29 02:06:19 np0005539551 ovs-ctl[47180]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:19 np0005539551 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 02:06:19 np0005539551 systemd[1]: Starting Open vSwitch...
Nov 29 02:06:19 np0005539551 systemd[1]: Finished Open vSwitch.
Nov 29 02:06:21 np0005539551 python3.9[47380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:22 np0005539551 python3.9[47532]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:06:25 np0005539551 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:06:25 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:06:26 np0005539551 python3.9[47687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:27 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 02:06:27 np0005539551 python3.9[47845]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:30 np0005539551 python3.9[47998]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:31 np0005539551 python3.9[48285]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:06:32 np0005539551 python3.9[48435]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:33 np0005539551 python3.9[48589]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:37 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:37 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:37 np0005539551 systemd[1]: Reloading.
Nov 29 02:06:37 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:37 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:38 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:41 np0005539551 python3.9[48904]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:41 np0005539551 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 02:06:41 np0005539551 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 02:06:41 np0005539551 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 02:06:41 np0005539551 systemd[1]: Stopping Network Manager...
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.1061] caught SIGTERM, shutting down normally.
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.1076] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.1076] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.1076] dhcp4 (eth0): state changed no lease
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.1079] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:41 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:06:41 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:06:41 np0005539551 NetworkManager[7196]: <info>  [1764400001.4788] exiting (success)
Nov 29 02:06:41 np0005539551 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 02:06:41 np0005539551 systemd[1]: Stopped Network Manager.
Nov 29 02:06:41 np0005539551 systemd[1]: NetworkManager.service: Consumed 19.014s CPU time, 4.3M memory peak, read 0B from disk, written 12.5K to disk.
Nov 29 02:06:41 np0005539551 systemd[1]: Starting Network Manager...
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.5412] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7e7d5958-a03f-4d88-a827-2b942ebd1608)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.5416] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.5469] manager[0x55b428543090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 02:06:41 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 02:06:41 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6295] hostname: hostname: using hostnamed
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6296] hostname: static hostname changed from (none) to "compute-1"
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6303] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6308] manager[0x55b428543090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6308] manager[0x55b428543090]: rfkill: WWAN hardware radio set enabled
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6328] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6337] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6338] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6339] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6339] manager: Networking is enabled by state file
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6341] settings: Loaded settings plugin: keyfile (internal)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6345] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6366] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6376] dhcp: init: Using DHCP client 'internal'
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6378] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6383] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6387] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6394] device (lo): Activation: starting connection 'lo' (bd43838b-4852-4c08-8b58-5443d26d81e3)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6400] device (eth0): carrier: link connected
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6405] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6410] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6410] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6415] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6421] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6426] device (eth1): carrier: link connected
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6429] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6433] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f) (indicated)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6434] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6438] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6444] device (eth1): Activation: starting connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f)
Nov 29 02:06:41 np0005539551 systemd[1]: Started Network Manager.
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6448] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6454] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6456] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6457] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6458] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6460] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6461] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6463] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6465] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6469] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6471] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6479] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6488] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6507] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Nov 29 02:06:41 np0005539551 NetworkManager[48922]: <info>  [1764400001.6513] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 02:06:41 np0005539551 systemd[1]: Starting Network Manager Wait Online...
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3380] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3397] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3402] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3407] device (lo): Activation: successful, device activated.
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3412] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3414] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3417] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.3419] device (eth1): Activation: successful, device activated.
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5203] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5205] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5209] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5213] device (eth0): Activation: successful, device activated.
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5217] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 02:06:42 np0005539551 NetworkManager[48922]: <info>  [1764400002.5218] manager: startup complete
Nov 29 02:06:42 np0005539551 systemd[1]: Finished Network Manager Wait Online.
Nov 29 02:06:43 np0005539551 python3.9[49130]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:43 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:43 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:43 np0005539551 systemd[1]: run-r5c35c3bdde2b4227b09eede7f8f509a9.service: Deactivated successfully.
Nov 29 02:06:51 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:51 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:51 np0005539551 systemd[1]: Reloading.
Nov 29 02:06:51 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:51 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:51 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:52 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:06:52 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:52 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:52 np0005539551 systemd[1]: run-r9c7bcb1de07545d497a690efbf78012c.service: Deactivated successfully.
Nov 29 02:07:09 np0005539551 python3.9[49590]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:11 np0005539551 python3.9[49742]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:11 np0005539551 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 02:07:12 np0005539551 python3.9[49898]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:12 np0005539551 python3.9[50050]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539551 python3.9[50202]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539551 python3.9[50354]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:15 np0005539551 python3.9[50506]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:16 np0005539551 python3.9[50629]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400035.1107297-653-120764626494229/.source _original_basename=.nnbw18pd follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:17 np0005539551 python3.9[50781]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:18 np0005539551 python3.9[50933]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 02:07:19 np0005539551 python3.9[51085]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:22 np0005539551 python3.9[51512]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 02:07:23 np0005539551 ansible-async_wrapper.py[51687]: Invoked with j316908400108 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400042.8152635-851-210229504372461/AnsiballZ_edpm_os_net_config.py _
Nov 29 02:07:23 np0005539551 ansible-async_wrapper.py[51690]: Starting module and watcher
Nov 29 02:07:23 np0005539551 ansible-async_wrapper.py[51690]: Start watching 51691 (300)
Nov 29 02:07:23 np0005539551 ansible-async_wrapper.py[51691]: Start module (51691)
Nov 29 02:07:23 np0005539551 ansible-async_wrapper.py[51687]: Return async_wrapper task started.
Nov 29 02:07:23 np0005539551 python3.9[51692]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 02:07:24 np0005539551 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 02:07:24 np0005539551 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 02:07:24 np0005539551 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 02:07:24 np0005539551 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 02:07:24 np0005539551 kernel: cfg80211: failed to load regulatory.db
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7367] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7386] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7881] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7882] audit: op="connection-add" uuid="f5aa805b-a27e-4595-b049-9ac5b2dfd42d" name="br-ex-br" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7897] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7898] audit: op="connection-add" uuid="98129abe-e595-433c-b9ff-22fa20f1ed04" name="br-ex-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7910] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7911] audit: op="connection-add" uuid="84dfb7c8-93ba-46ca-8b89-12a3a843569d" name="eth1-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7922] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7923] audit: op="connection-add" uuid="ddf2270d-945c-4b45-b5cd-e46ff98eff2f" name="vlan20-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7935] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7936] audit: op="connection-add" uuid="5d68d3b7-f46f-4e4e-9b15-3a271d195460" name="vlan21-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7947] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7948] audit: op="connection-add" uuid="4262fb2b-ae55-47e9-a8ca-51d770927492" name="vlan22-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7958] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7960] audit: op="connection-add" uuid="18345de6-9b0b-4ace-80c8-fa707ae02793" name="vlan23-port" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7978] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode" pid=51693 uid=0 result="success"
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7992] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 02:07:25 np0005539551 NetworkManager[48922]: <info>  [1764400045.7993] audit: op="connection-add" uuid="1ab40503-cc66-4b19-844e-093f43517567" name="br-ex-if" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8255] audit: op="connection-update" uuid="7b4783cd-23e2-583d-98c4-e3baef46d09f" name="ci-private-network" args="ovs-external-ids.data,connection.controller,connection.timestamp,connection.port-type,connection.slave-type,connection.master,ipv4.method,ipv4.routing-rules,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.never-default,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ovs-interface.type" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8303] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8307] audit: op="connection-add" uuid="c3d85c09-1070-40aa-b7d9-a3babcdd609c" name="vlan20-if" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8349] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8352] audit: op="connection-add" uuid="76920eda-d9e3-4a61-ad4e-1df32afc965d" name="vlan21-if" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8389] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8393] audit: op="connection-add" uuid="3338545f-dff5-454e-af4b-26e8b9d4fbcb" name="vlan22-if" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8423] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8426] audit: op="connection-add" uuid="39f27f88-dc46-4c52-9e5c-25c57ce96678" name="vlan23-if" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8448] audit: op="connection-delete" uuid="acf43975-524b-3c36-83b0-d7f3a21fba6b" name="Wired connection 1" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8470] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8489] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8497] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f5aa805b-a27e-4595-b049-9ac5b2dfd42d)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8499] audit: op="connection-activate" uuid="f5aa805b-a27e-4595-b049-9ac5b2dfd42d" name="br-ex-br" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8502] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8514] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8522] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (98129abe-e595-433c-b9ff-22fa20f1ed04)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8525] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8537] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8546] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (84dfb7c8-93ba-46ca-8b89-12a3a843569d)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8549] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8562] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8573] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (ddf2270d-945c-4b45-b5cd-e46ff98eff2f)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8578] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8595] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8606] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (5d68d3b7-f46f-4e4e-9b15-3a271d195460)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8611] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8628] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8639] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4262fb2b-ae55-47e9-a8ca-51d770927492)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8644] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8660] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8671] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (18345de6-9b0b-4ace-80c8-fa707ae02793)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8673] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8680] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8685] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8701] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8713] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8724] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (1ab40503-cc66-4b19-844e-093f43517567)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8726] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8733] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8737] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8740] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8742] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8766] device (eth1): disconnecting for new activation request.
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8767] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8774] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8782] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8785] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8790] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8799] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8808] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c3d85c09-1070-40aa-b7d9-a3babcdd609c)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8809] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8815] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8819] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8822] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8828] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8837] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8845] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (76920eda-d9e3-4a61-ad4e-1df32afc965d)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8846] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8852] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8856] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8858] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8861] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8867] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8872] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3338545f-dff5-454e-af4b-26e8b9d4fbcb)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8873] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8877] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8880] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8882] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8885] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8890] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8898] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (39f27f88-dc46-4c52-9e5c-25c57ce96678)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8898] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8903] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8906] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8907] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8909] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8925] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode" pid=51693 uid=0 result="success"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8927] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8932] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8935] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8942] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8947] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8953] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8957] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8959] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8965] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8970] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 kernel: ovs-system: entered promiscuous mode
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8974] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8977] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8982] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8988] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8992] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 kernel: Timeout policy base is empty
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.8994] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9000] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 systemd-udevd[51699]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9006] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9009] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9010] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9016] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9021] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9021] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9022] dhcp4 (eth0): state changed no lease
Nov 29 02:07:26 np0005539551 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9023] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9035] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9042] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51693 uid=0 result="fail" reason="Device is not activated"
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9051] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9232] device (eth1): Activation: starting connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f)
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9235] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9240] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9242] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9251] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9257] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9260] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9268] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539551 NetworkManager[48922]: <info>  [1764400046.9272] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539551 kernel: br-ex: entered promiscuous mode
Nov 29 02:07:26 np0005539551 kernel: vlan22: entered promiscuous mode
Nov 29 02:07:26 np0005539551 systemd-udevd[51697]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:26 np0005539551 kernel: vlan21: entered promiscuous mode
Nov 29 02:07:26 np0005539551 kernel: vlan20: entered promiscuous mode
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0217] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0220] device (eth1): released from controller device eth1
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0229] device (eth1): disconnecting for new activation request.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0229] audit: op="connection-activate" uuid="7b4783cd-23e2-583d-98c4-e3baef46d09f" name="ci-private-network" pid=51693 uid=0 result="success"
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0230] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0231] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0232] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0234] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0236] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0237] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0243] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0253] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0257] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0261] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0265] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0268] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0271] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0276] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0279] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0283] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0286] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0290] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0293] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0297] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0306] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0313] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0320] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0328] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0330] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0338] device (eth1): Activation: starting connection 'ci-private-network' (7b4783cd-23e2-583d-98c4-e3baef46d09f)
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0358] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0361] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 kernel: vlan23: entered promiscuous mode
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0367] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0386] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0395] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0418] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0423] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0429] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0436] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0447] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0452] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0456] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0462] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0468] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0471] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0474] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0476] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0484] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0489] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0497] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0503] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0509] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0516] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0518] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0527] device (eth1): Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0542] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0558] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0966] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0970] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:27 np0005539551 NetworkManager[48922]: <info>  [1764400047.0976] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539551 python3.9[52055]: ansible-ansible.legacy.async_status Invoked with jid=j316908400108.51687 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:28 np0005539551 NetworkManager[48922]: <info>  [1764400048.2901] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 29 02:07:28 np0005539551 NetworkManager[48922]: <info>  [1764400048.4412] checkpoint[0x55b428519950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 02:07:28 np0005539551 NetworkManager[48922]: <info>  [1764400048.4414] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 29 02:07:28 np0005539551 ansible-async_wrapper.py[51690]: 51691 still running (300)
Nov 29 02:07:28 np0005539551 NetworkManager[48922]: <info>  [1764400048.8672] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 29 02:07:28 np0005539551 NetworkManager[48922]: <info>  [1764400048.8701] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.1136] audit: op="networking-control" arg="global-dns-configuration" pid=51693 uid=0 result="success"
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.1164] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.1191] audit: op="networking-control" arg="global-dns-configuration" pid=51693 uid=0 result="success"
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.1219] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.2752] checkpoint[0x55b428519a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 02:07:29 np0005539551 NetworkManager[48922]: <info>  [1764400049.2757] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 29 02:07:29 np0005539551 ansible-async_wrapper.py[51691]: Module complete (51691)
Nov 29 02:07:31 np0005539551 python3.9[52161]: ansible-ansible.legacy.async_status Invoked with jid=j316908400108.51687 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:31 np0005539551 python3.9[52261]: ansible-ansible.legacy.async_status Invoked with jid=j316908400108.51687 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:07:32 np0005539551 python3.9[52413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:33 np0005539551 python3.9[52536]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400052.1567492-932-40500063850528/.source.returncode _original_basename=.5f8ic29y follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:33 np0005539551 ansible-async_wrapper.py[51690]: Done in kid B.
Nov 29 02:07:34 np0005539551 python3.9[52688]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:34 np0005539551 python3.9[52812]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400053.5951562-980-279230889365621/.source.cfg _original_basename=.kly8yw9d follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:35 np0005539551 python3.9[52964]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:07:35 np0005539551 systemd[1]: Reloading Network Manager...
Nov 29 02:07:35 np0005539551 NetworkManager[48922]: <info>  [1764400055.7501] audit: op="reload" arg="0" pid=52968 uid=0 result="success"
Nov 29 02:07:35 np0005539551 NetworkManager[48922]: <info>  [1764400055.7510] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 02:07:35 np0005539551 systemd[1]: Reloaded Network Manager.
Nov 29 02:07:36 np0005539551 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 02:07:36 np0005539551 systemd[1]: session-11.scope: Consumed 55.967s CPU time.
Nov 29 02:07:36 np0005539551 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Nov 29 02:07:36 np0005539551 systemd-logind[788]: Removed session 11.
Nov 29 02:07:41 np0005539551 systemd-logind[788]: New session 12 of user zuul.
Nov 29 02:07:41 np0005539551 systemd[1]: Started Session 12 of User zuul.
Nov 29 02:07:42 np0005539551 python3.9[53152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:43 np0005539551 python3.9[53306]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:45 np0005539551 python3.9[53500]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:45 np0005539551 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 02:07:45 np0005539551 systemd[1]: session-12.scope: Consumed 2.226s CPU time.
Nov 29 02:07:45 np0005539551 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Nov 29 02:07:45 np0005539551 systemd-logind[788]: Removed session 12.
Nov 29 02:07:45 np0005539551 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:07:50 np0005539551 systemd-logind[788]: New session 13 of user zuul.
Nov 29 02:07:50 np0005539551 systemd[1]: Started Session 13 of User zuul.
Nov 29 02:07:52 np0005539551 python3.9[53683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:53 np0005539551 python3.9[53837]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:54 np0005539551 python3.9[53994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:55 np0005539551 python3.9[54078]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:07:57 np0005539551 python3.9[54232]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:59 np0005539551 python3.9[54427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:00 np0005539551 python3.9[54579]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay-compat3424697469-merged.mount: Deactivated successfully.
Nov 29 02:08:00 np0005539551 podman[54580]: 2025-11-29 07:08:00.135922306 +0000 UTC m=+0.053177314 system refresh
Nov 29 02:08:01 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:01 np0005539551 python3.9[54742]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:02 np0005539551 python3.9[54865]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400080.8119993-204-63386332871035/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0ba7f7d4a2316f997bff41be0900c5cf7753edd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:02 np0005539551 python3.9[55017]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:03 np0005539551 python3.9[55140]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400082.3303144-248-116786221815791/.source.conf follow=False _original_basename=registries.conf.j2 checksum=197bf6e1388aca01b529f5e8d08286f263a7fb81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:04 np0005539551 python3.9[55292]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539551 python3.9[55444]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539551 python3.9[55596]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:06 np0005539551 python3.9[55748]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:07 np0005539551 python3.9[55900]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:10 np0005539551 python3.9[56053]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:10 np0005539551 python3.9[56207]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:11 np0005539551 python3.9[56359]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:12 np0005539551 python3.9[56511]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:13 np0005539551 python3.9[56664]: ansible-service_facts Invoked
Nov 29 02:08:13 np0005539551 network[56681]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:08:13 np0005539551 network[56682]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:08:13 np0005539551 network[56683]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:08:19 np0005539551 python3.9[57135]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:22 np0005539551 python3.9[57288]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:08:24 np0005539551 python3.9[57440]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:24 np0005539551 python3.9[57565]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400103.6207407-680-23485405085285/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:25 np0005539551 python3.9[57719]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:26 np0005539551 python3.9[57844]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400105.139962-726-16813674120965/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:28 np0005539551 python3.9[57998]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:29 np0005539551 python3.9[58152]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:30 np0005539551 python3.9[58236]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:08:32 np0005539551 python3.9[58390]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:33 np0005539551 python3.9[58474]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:08:33 np0005539551 chronyd[790]: chronyd exiting
Nov 29 02:08:33 np0005539551 systemd[1]: Stopping NTP client/server...
Nov 29 02:08:33 np0005539551 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 02:08:33 np0005539551 systemd[1]: Stopped NTP client/server.
Nov 29 02:08:33 np0005539551 systemd[1]: Starting NTP client/server...
Nov 29 02:08:33 np0005539551 chronyd[58483]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 02:08:33 np0005539551 chronyd[58483]: Frequency -26.726 +/- 0.223 ppm read from /var/lib/chrony/drift
Nov 29 02:08:33 np0005539551 chronyd[58483]: Loaded seccomp filter (level 2)
Nov 29 02:08:33 np0005539551 systemd[1]: Started NTP client/server.
Nov 29 02:08:34 np0005539551 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 02:08:34 np0005539551 systemd[1]: session-13.scope: Consumed 25.653s CPU time.
Nov 29 02:08:34 np0005539551 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Nov 29 02:08:34 np0005539551 systemd-logind[788]: Removed session 13.
Nov 29 02:08:40 np0005539551 systemd-logind[788]: New session 14 of user zuul.
Nov 29 02:08:40 np0005539551 systemd[1]: Started Session 14 of User zuul.
Nov 29 02:08:40 np0005539551 python3.9[58664]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:41 np0005539551 python3.9[58816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:42 np0005539551 python3.9[58939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400121.1866145-68-61974184697381/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:43 np0005539551 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 02:08:43 np0005539551 systemd[1]: session-14.scope: Consumed 1.474s CPU time.
Nov 29 02:08:43 np0005539551 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Nov 29 02:08:43 np0005539551 systemd-logind[788]: Removed session 14.
Nov 29 02:08:48 np0005539551 systemd-logind[788]: New session 15 of user zuul.
Nov 29 02:08:48 np0005539551 systemd[1]: Started Session 15 of User zuul.
Nov 29 02:08:49 np0005539551 python3.9[59117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:50 np0005539551 python3.9[59273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:51 np0005539551 python3.9[59448]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:52 np0005539551 python3.9[59571]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764400131.1410208-89-43211395949226/.source.json _original_basename=._82m2jn6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:53 np0005539551 python3.9[59723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:54 np0005539551 python3.9[59846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400133.015428-158-2133225448026/.source _original_basename=.9mbwtzsk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:54 np0005539551 python3.9[59998]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:55 np0005539551 python3.9[60150]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:56 np0005539551 python3.9[60273]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400135.2432091-230-257506074427444/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:56 np0005539551 python3.9[60425]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:57 np0005539551 python3.9[60548]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400136.424867-230-59364061341901/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:58 np0005539551 python3.9[60700]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:59 np0005539551 python3.9[60852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:59 np0005539551 python3.9[60975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400138.971476-341-136182515691448/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:00 np0005539551 python3.9[61127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:01 np0005539551 python3.9[61250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400140.1327808-386-207337750870408/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:02 np0005539551 python3.9[61402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:02 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539551 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 02:09:02 np0005539551 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 02:09:03 np0005539551 python3.9[61630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:03 np0005539551 python3.9[61753]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400142.965584-455-61261762418756/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:04 np0005539551 python3.9[61905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:05 np0005539551 python3.9[62028]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400144.2010503-500-109366352595345/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:05 np0005539551 python3.9[62180]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:06 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539551 systemd[1]: Starting Create netns directory...
Nov 29 02:09:06 np0005539551 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:09:06 np0005539551 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:09:06 np0005539551 systemd[1]: Finished Create netns directory.
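Annotation: the tasks above install a systemd preset at /etc/systemd/system-preset/91-netns-placeholder.preset and then enable and start netns-placeholder.service ("Create netns directory"). The preset's contents are not logged (content=NOT_LOGGING_PARAMETER); a minimal preset consistent with this behavior, following standard systemd.preset syntax, would be (hypothetical content, not taken from the log):

```
# /etc/systemd/system-preset/91-netns-placeholder.preset
# Hypothetical reconstruction -- actual file contents are redacted above.
enable netns-placeholder.service
```

Presets only control what `systemctl preset`/first-boot defaults do; the explicit `enabled=True state=started` in the systemd task is what actually enables and starts the unit here.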
Nov 29 02:09:07 np0005539551 python3.9[62406]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:09:07 np0005539551 network[62423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:09:07 np0005539551 network[62424]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:09:07 np0005539551 network[62425]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:09:12 np0005539551 python3.9[62687]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:12 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:12 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:12 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:13 np0005539551 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 02:09:13 np0005539551 iptables.init[62728]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 02:09:13 np0005539551 iptables.init[62728]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 02:09:13 np0005539551 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 02:09:13 np0005539551 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 02:09:14 np0005539551 python3.9[62925]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539551 python3.9[63079]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:09:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:15 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:15 np0005539551 systemd[1]: Starting Netfilter Tables...
Nov 29 02:09:15 np0005539551 systemd[1]: Finished Netfilter Tables.
Nov 29 02:09:16 np0005539551 python3.9[63271]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:18 np0005539551 python3.9[63424]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:18 np0005539551 python3.9[63549]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400157.9680862-707-264043016836513/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:19 np0005539551 python3.9[63702]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:09:19 np0005539551 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 02:09:19 np0005539551 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 02:09:20 np0005539551 python3.9[63858]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:21 np0005539551 python3.9[64010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:21 np0005539551 python3.9[64133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400160.9244323-800-241748445430123/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:23 np0005539551 python3.9[64285]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:09:23 np0005539551 systemd[1]: Starting Time & Date Service...
Nov 29 02:09:23 np0005539551 systemd[1]: Started Time & Date Service.
Nov 29 02:09:24 np0005539551 python3.9[64441]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:24 np0005539551 python3.9[64593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:25 np0005539551 python3.9[64716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400164.258533-905-226094101808508/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:25 np0005539551 python3.9[64868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:26 np0005539551 python3.9[64991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400165.503215-950-157256367749248/.source.yaml _original_basename=.hiuacykg follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:27 np0005539551 python3.9[65143]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:27 np0005539551 python3.9[65266]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400166.7515404-996-181941456944852/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:28 np0005539551 python3.9[65418]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:29 np0005539551 python3.9[65571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:30 np0005539551 python3[65724]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:09:30 np0005539551 python3.9[65876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:31 np0005539551 python3.9[65999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400170.3697429-1112-254618604794754/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:32 np0005539551 python3.9[66151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:32 np0005539551 python3.9[66274]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400171.80085-1157-58134354845383/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:33 np0005539551 python3.9[66426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:34 np0005539551 python3.9[66549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400173.1380658-1202-65160341796443/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:35 np0005539551 python3.9[66701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:35 np0005539551 python3.9[66824]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400174.5793657-1247-76910874083583/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:36 np0005539551 python3.9[66976]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:37 np0005539551 python3.9[67099]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400175.9774842-1292-236485375004519/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:37 np0005539551 python3.9[67251]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:38 np0005539551 python3.9[67403]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:39 np0005539551 python3.9[67562]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
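Annotation: in the blockinfile entry above, `#012` is journald's escape for a newline. Decoded, the block appended to /etc/sysconfig/nftables.conf (between the module's BEGIN/END markers, validated with `nft -c -f %s`) is:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Note the load-time set differs from the check at 02:09:38, which also concatenated edpm-flushes.nft and edpm-update-jumps.nft; the persistent config only includes the chains, rules, and jumps.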
Nov 29 02:09:40 np0005539551 python3.9[67715]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:41 np0005539551 python3.9[67867]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:42 np0005539551 python3.9[68019]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:09:42 np0005539551 python3.9[68172]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
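Annotation: with `state=mounted` and `boot=True`, ansible.posix.mount both mounts the filesystem and persists it in /etc/fstab. From the module arguments above (src=none, dump=0, passno=0), the resulting fstab entries would be equivalent to the following (reconstructed, not copied from the host):

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```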
Nov 29 02:09:43 np0005539551 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 02:09:43 np0005539551 systemd[1]: session-15.scope: Consumed 32.815s CPU time.
Nov 29 02:09:43 np0005539551 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Nov 29 02:09:43 np0005539551 systemd-logind[788]: Removed session 15.
Nov 29 02:09:52 np0005539551 systemd-logind[788]: New session 16 of user zuul.
Nov 29 02:09:52 np0005539551 systemd[1]: Started Session 16 of User zuul.
Nov 29 02:09:53 np0005539551 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:09:54 np0005539551 python3.9[68355]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:09:54 np0005539551 python3.9[68507]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:09:56 np0005539551 python3.9[68659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:09:57 np0005539551 python3.9[68811]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012 create=True mode=0644 path=/tmp/ansible.fau00_wi state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:58 np0005539551 python3.9[68963]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fau00_wi' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:59 np0005539551 python3.9[69117]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fau00_wi state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:59 np0005539551 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 02:09:59 np0005539551 systemd[1]: session-16.scope: Consumed 2.986s CPU time.
Nov 29 02:09:59 np0005539551 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Nov 29 02:09:59 np0005539551 systemd-logind[788]: Removed session 16.
Nov 29 02:10:06 np0005539551 systemd-logind[788]: New session 17 of user zuul.
Nov 29 02:10:06 np0005539551 systemd[1]: Started Session 17 of User zuul.
Nov 29 02:10:07 np0005539551 python3.9[69295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:08 np0005539551 python3.9[69451]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:10:09 np0005539551 python3.9[69605]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:10:13 np0005539551 python3.9[69758]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:13 np0005539551 python3.9[69911]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:15 np0005539551 python3.9[70065]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:16 np0005539551 python3.9[70220]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
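Annotation: the entries from 02:09:37 through 02:10:16 show a change-marker pattern. Writing edpm-rules.nft touches a `.changed` marker file; a later play stats the marker, applies the flush/rules/update-jumps set only because the marker exists, and then removes it so the next run is a no-op. A minimal Python sketch of that idempotency pattern (the marker path and apply command here are illustrative, not the playbook's code):

```python
import os
import subprocess


def apply_if_changed(marker: str, cmd: list[str]) -> bool:
    """Run cmd only when the change marker exists, then consume the marker."""
    if not os.path.exists(marker):
        return False                     # rules unchanged since last apply
    subprocess.run(cmd, check=True)      # e.g. cat rule files | nft -f -
    os.remove(marker)                    # clear marker: next run is a no-op
    return True
```

This makes the expensive apply step conditional on an earlier task having actually changed the rule files, which is why the log shows the marker touched, checked, and then deleted.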
Nov 29 02:10:17 np0005539551 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 02:10:17 np0005539551 systemd[1]: session-17.scope: Consumed 4.112s CPU time.
Nov 29 02:10:17 np0005539551 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Nov 29 02:10:17 np0005539551 systemd-logind[788]: Removed session 17.
Nov 29 02:10:22 np0005539551 systemd-logind[788]: New session 18 of user zuul.
Nov 29 02:10:22 np0005539551 systemd[1]: Started Session 18 of User zuul.
Nov 29 02:10:23 np0005539551 python3.9[70398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:24 np0005539551 python3.9[70554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:10:25 np0005539551 python3.9[70638]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:10:28 np0005539551 python3.9[70789]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:29 np0005539551 python3.9[70940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:10:30 np0005539551 python3.9[71090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:30 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:10:30 np0005539551 python3.9[71241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:31 np0005539551 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 02:10:31 np0005539551 systemd[1]: session-18.scope: Consumed 5.511s CPU time.
Nov 29 02:10:31 np0005539551 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Nov 29 02:10:31 np0005539551 systemd-logind[788]: Removed session 18.
Nov 29 02:10:39 np0005539551 systemd-logind[788]: New session 19 of user zuul.
Nov 29 02:10:39 np0005539551 systemd[1]: Started Session 19 of User zuul.
Nov 29 02:10:42 np0005539551 chronyd[58483]: Selected source 162.159.200.1 (pool.ntp.org)
Nov 29 02:10:47 np0005539551 python3[72007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:49 np0005539551 python3[72102]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:10:50 np0005539551 python3[72129]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:10:51 np0005539551 python3[72155]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:51 np0005539551 kernel: loop: module loaded
Nov 29 02:10:51 np0005539551 kernel: loop3: detected capacity change from 0 to 14680064
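Annotation: the `dd if=/dev/zero ... count=0 seek=7G` invocation above creates a 7 GiB sparse backing file, and the kernel reports the loop device's new capacity in 512-byte sectors. The figure 14680064 in the capacity-change message checks out:

```python
GIB = 1024 ** 3   # dd's G suffix is binary (GiB)
SECTOR = 512      # kernel block-device capacities are in 512-byte sectors

sectors = 7 * GIB // SECTOR
print(sectors)    # 14680064, matching the loop3 capacity-change message
```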
Nov 29 02:10:52 np0005539551 python3[72190]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:52 np0005539551 lvm[72193]: PV /dev/loop3 not used.
Nov 29 02:10:52 np0005539551 lvm[72202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:52 np0005539551 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 02:10:52 np0005539551 lvm[72204]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 02:10:52 np0005539551 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 02:10:53 np0005539551 python3[72282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:10:54 np0005539551 python3[72355]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400253.5245295-36982-222716545397269/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:10:55 np0005539551 python3[72405]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:10:55 np0005539551 systemd[1]: Reloading.
Nov 29 02:10:55 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:55 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:55 np0005539551 systemd[1]: Starting Ceph OSD losetup...
Nov 29 02:10:55 np0005539551 bash[72444]: /dev/loop3: [64513]:4327940 (/var/lib/ceph-osd-0.img)
Nov 29 02:10:55 np0005539551 systemd[1]: Finished Ceph OSD losetup.
Nov 29 02:10:55 np0005539551 lvm[72446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:55 np0005539551 lvm[72446]: VG ceph_vg0 finished
Nov 29 02:10:58 np0005539551 python3[72470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:13:31 np0005539551 systemd-logind[788]: New session 20 of user ceph-admin.
Nov 29 02:13:31 np0005539551 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:13:31 np0005539551 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:13:31 np0005539551 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:13:31 np0005539551 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:13:31 np0005539551 systemd[72521]: Queued start job for default target Main User Target.
Nov 29 02:13:31 np0005539551 systemd[72521]: Created slice User Application Slice.
Nov 29 02:13:31 np0005539551 systemd[72521]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:13:31 np0005539551 systemd[72521]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:13:31 np0005539551 systemd[72521]: Reached target Paths.
Nov 29 02:13:31 np0005539551 systemd[72521]: Reached target Timers.
Nov 29 02:13:31 np0005539551 systemd[72521]: Starting D-Bus User Message Bus Socket...
Nov 29 02:13:31 np0005539551 systemd[72521]: Starting Create User's Volatile Files and Directories...
Nov 29 02:13:31 np0005539551 systemd[72521]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:13:31 np0005539551 systemd[72521]: Finished Create User's Volatile Files and Directories.
Nov 29 02:13:31 np0005539551 systemd[72521]: Reached target Sockets.
Nov 29 02:13:31 np0005539551 systemd[72521]: Reached target Basic System.
Nov 29 02:13:31 np0005539551 systemd[72521]: Reached target Main User Target.
Nov 29 02:13:31 np0005539551 systemd[72521]: Startup finished in 112ms.
Nov 29 02:13:31 np0005539551 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:13:31 np0005539551 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 02:13:31 np0005539551 systemd-logind[788]: New session 22 of user ceph-admin.
Nov 29 02:13:31 np0005539551 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 02:13:32 np0005539551 systemd-logind[788]: New session 23 of user ceph-admin.
Nov 29 02:13:32 np0005539551 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 02:13:32 np0005539551 systemd-logind[788]: New session 24 of user ceph-admin.
Nov 29 02:13:32 np0005539551 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 02:13:32 np0005539551 systemd-logind[788]: New session 25 of user ceph-admin.
Nov 29 02:13:32 np0005539551 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 02:13:33 np0005539551 systemd-logind[788]: New session 26 of user ceph-admin.
Nov 29 02:13:33 np0005539551 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 02:13:33 np0005539551 systemd-logind[788]: New session 27 of user ceph-admin.
Nov 29 02:13:33 np0005539551 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 02:13:33 np0005539551 systemd-logind[788]: New session 28 of user ceph-admin.
Nov 29 02:13:33 np0005539551 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 02:13:34 np0005539551 systemd-logind[788]: New session 29 of user ceph-admin.
Nov 29 02:13:34 np0005539551 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 02:13:34 np0005539551 systemd-logind[788]: New session 30 of user ceph-admin.
Nov 29 02:13:34 np0005539551 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 02:13:35 np0005539551 systemd-logind[788]: New session 31 of user ceph-admin.
Nov 29 02:13:35 np0005539551 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 02:13:35 np0005539551 systemd-logind[788]: New session 32 of user ceph-admin.
Nov 29 02:13:35 np0005539551 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 02:13:35 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:37 np0005539551 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73490 (sysctl)
Nov 29 02:13:37 np0005539551 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 02:13:37 np0005539551 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 02:13:38 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:13:41 np0005539551 systemd[1]: var-lib-containers-storage-overlay-compat3739810972-merged.mount: Deactivated successfully.
Nov 29 02:13:43 np0005539551 systemd[1]: var-lib-containers-storage-overlay-compat3739810972-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.064886068 +0000 UTC m=+26.861469290 container create 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.047205495 +0000 UTC m=+26.843788737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:06 np0005539551 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 02:14:06 np0005539551 systemd[1]: Started libpod-conmon-422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b.scope.
Nov 29 02:14:06 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.165391085 +0000 UTC m=+26.961974337 container init 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.172598636 +0000 UTC m=+26.969181858 container start 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.1763148 +0000 UTC m=+26.972898022 container attach 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:06 np0005539551 tender_ishizaka[73830]: 167 167
Nov 29 02:14:06 np0005539551 systemd[1]: libpod-422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b.scope: Deactivated successfully.
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.179391936 +0000 UTC m=+26.975975178 container died 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:06 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ecc03a913d80baaddd4ff44b4044519f769730110f6d68144a0e0551ccecce9b-merged.mount: Deactivated successfully.
Nov 29 02:14:06 np0005539551 podman[73768]: 2025-11-29 07:14:06.217305465 +0000 UTC m=+27.013888697 container remove 422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:14:06 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:06 np0005539551 systemd[1]: libpod-conmon-422d022bcbc633e42587f121843d818e7e2095ecece43b69d79763b14bd9dd7b.scope: Deactivated successfully.
Nov 29 02:14:06 np0005539551 podman[73854]: 2025-11-29 07:14:06.365888524 +0000 UTC m=+0.045166433 container create 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:14:06 np0005539551 systemd[1]: Started libpod-conmon-8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879.scope.
Nov 29 02:14:06 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:06 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dc0e6a497fa0cae3017deadf2672f358d9682b0dd4ec48c46351129774c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:06 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dc0e6a497fa0cae3017deadf2672f358d9682b0dd4ec48c46351129774c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:06 np0005539551 podman[73854]: 2025-11-29 07:14:06.434336405 +0000 UTC m=+0.113614334 container init 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:06 np0005539551 podman[73854]: 2025-11-29 07:14:06.342397738 +0000 UTC m=+0.021675657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:06 np0005539551 podman[73854]: 2025-11-29 07:14:06.440744324 +0000 UTC m=+0.120022223 container start 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:06 np0005539551 podman[73854]: 2025-11-29 07:14:06.444749206 +0000 UTC m=+0.124027135 container attach 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:14:07 np0005539551 zen_lamport[73870]: [
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:    {
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "available": false,
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "ceph_device": false,
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "lsm_data": {},
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "lvs": [],
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "path": "/dev/sr0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "rejected_reasons": [
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "Insufficient space (<5GB)",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "Has a FileSystem"
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        ],
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        "sys_api": {
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "actuators": null,
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "device_nodes": "sr0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "devname": "sr0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "human_readable_size": "482.00 KB",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "id_bus": "ata",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "model": "QEMU DVD-ROM",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "nr_requests": "2",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "parent": "/dev/sr0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "partitions": {},
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "path": "/dev/sr0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "removable": "1",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "rev": "2.5+",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "ro": "0",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "rotational": "1",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "sas_address": "",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "sas_device_handle": "",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "scheduler_mode": "mq-deadline",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "sectors": 0,
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "sectorsize": "2048",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "size": 493568.0,
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "support_discard": "2048",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "type": "disk",
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:            "vendor": "QEMU"
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:        }
Nov 29 02:14:07 np0005539551 zen_lamport[73870]:    }
Nov 29 02:14:07 np0005539551 zen_lamport[73870]: ]
Nov 29 02:14:07 np0005539551 systemd[1]: libpod-8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879.scope: Deactivated successfully.
Nov 29 02:14:07 np0005539551 systemd[1]: libpod-8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879.scope: Consumed 1.249s CPU time.
Nov 29 02:14:07 np0005539551 podman[73854]: 2025-11-29 07:14:07.679893364 +0000 UTC m=+1.359171263 container died 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:14:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay-055dc0e6a497fa0cae3017deadf2672f358d9682b0dd4ec48c46351129774c1b-merged.mount: Deactivated successfully.
Nov 29 02:14:07 np0005539551 podman[73854]: 2025-11-29 07:14:07.738622633 +0000 UTC m=+1.417900532 container remove 8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lamport, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:14:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:07 np0005539551 systemd[1]: libpod-conmon-8614d14cb73af9225299fccdeca97b861306f96578933603ec21b671943ea879.scope: Deactivated successfully.
Nov 29 02:14:12 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:12 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.64505485 +0000 UTC m=+0.046051236 container create 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:14:12 np0005539551 systemd[1]: Started libpod-conmon-0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7.scope.
Nov 29 02:14:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.623953611 +0000 UTC m=+0.024950017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.737236064 +0000 UTC m=+0.138232470 container init 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.744163127 +0000 UTC m=+0.145159513 container start 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:12 np0005539551 optimistic_yalow[76714]: 167 167
Nov 29 02:14:12 np0005539551 systemd[1]: libpod-0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7.scope: Deactivated successfully.
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.767695485 +0000 UTC m=+0.168691881 container attach 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:12 np0005539551 podman[76697]: 2025-11-29 07:14:12.769220697 +0000 UTC m=+0.170217083 container died 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:14:13 np0005539551 podman[76697]: 2025-11-29 07:14:13.005510845 +0000 UTC m=+0.406507231 container remove 0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yalow, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:14:13 np0005539551 systemd[1]: libpod-conmon-0e0b2b1e8d242467dde2f57abd0293c8f4594105371c08e8c86012ae83bf62d7.scope: Deactivated successfully.
Nov 29 02:14:13 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:13 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:13 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:13 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:13 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:13 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:13 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:13 np0005539551 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 02:14:13 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:13 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:13 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:13 np0005539551 systemd[1]: Reached target Ceph cluster b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:14:14 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:14 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:14 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:14 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:14 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:14 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:14 np0005539551 systemd[1]: Created slice Slice /system/ceph-b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:14:14 np0005539551 systemd[1]: Reached target System Time Set.
Nov 29 02:14:14 np0005539551 systemd[1]: Reached target System Time Synchronized.
Nov 29 02:14:14 np0005539551 systemd[1]: Starting Ceph crash.compute-1 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:14:14 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:14 np0005539551 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:14 np0005539551 podman[76974]: 2025-11-29 07:14:14.95901083 +0000 UTC m=+0.040652195 container create 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:14:15 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4768aebc528cceed4ef7752c6ac750ebbb488e0fb5e3ca23cab72bc97fb20c/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:15 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4768aebc528cceed4ef7752c6ac750ebbb488e0fb5e3ca23cab72bc97fb20c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:15 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4768aebc528cceed4ef7752c6ac750ebbb488e0fb5e3ca23cab72bc97fb20c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:15 np0005539551 podman[76974]: 2025-11-29 07:14:15.01844745 +0000 UTC m=+0.100088845 container init 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:15 np0005539551 podman[76974]: 2025-11-29 07:14:15.024030716 +0000 UTC m=+0.105672081 container start 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:15 np0005539551 bash[76974]: 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7
Nov 29 02:14:15 np0005539551 podman[76974]: 2025-11-29 07:14:14.941590644 +0000 UTC m=+0.023232029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:15 np0005539551 systemd[1]: Started Ceph crash.compute-1 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.477+0000 7f1d03f92640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.477+0000 7f1d03f92640 -1 AuthRegistry(0x7f1cfc066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.478+0000 7f1d03f92640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.478+0000 7f1d03f92640 -1 AuthRegistry(0x7f1d03f91000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.481+0000 7f1d01d07640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: 2025-11-29T07:14:15.481+0000 7f1d03f92640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 02:14:15 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1[76988]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.708074805 +0000 UTC m=+0.021553212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.877671522 +0000 UTC m=+0.191149919 container create b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:14:15 np0005539551 systemd[1]: Started libpod-conmon-b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed.scope.
Nov 29 02:14:15 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.976308255 +0000 UTC m=+0.289786682 container init b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.984659569 +0000 UTC m=+0.298137966 container start b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.989448363 +0000 UTC m=+0.302926780 container attach b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:15 np0005539551 unruffled_burnell[77159]: 167 167
Nov 29 02:14:15 np0005539551 systemd[1]: libpod-b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed.scope: Deactivated successfully.
Nov 29 02:14:15 np0005539551 podman[77142]: 2025-11-29 07:14:15.993847485 +0000 UTC m=+0.307325882 container died b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:14:16 np0005539551 systemd[1]: var-lib-containers-storage-overlay-8a704f695420f38e591497678a22cc56f74b91394af4eb8a161b4ab85f4508a8-merged.mount: Deactivated successfully.
Nov 29 02:14:16 np0005539551 podman[77142]: 2025-11-29 07:14:16.040421586 +0000 UTC m=+0.353899983 container remove b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_burnell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:16 np0005539551 systemd[1]: libpod-conmon-b74e1118f2237503510738f804c6aa30a31b0dc366d934f79d3d2f638c389eed.scope: Deactivated successfully.
Nov 29 02:14:16 np0005539551 podman[77184]: 2025-11-29 07:14:16.215797343 +0000 UTC m=+0.049839973 container create 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:14:16 np0005539551 systemd[1]: Started libpod-conmon-6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884.scope.
Nov 29 02:14:16 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539551 podman[77184]: 2025-11-29 07:14:16.193779908 +0000 UTC m=+0.027822568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539551 podman[77184]: 2025-11-29 07:14:16.299365145 +0000 UTC m=+0.133407795 container init 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:16 np0005539551 podman[77184]: 2025-11-29 07:14:16.307196854 +0000 UTC m=+0.141239484 container start 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:16 np0005539551 podman[77184]: 2025-11-29 07:14:16.313432329 +0000 UTC m=+0.147474979 container attach 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: --> relative data size: 1.0
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9602a40-9939-419b-9b17-98d4b245553e
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:14:17 np0005539551 lvm[77248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:14:17 np0005539551 lvm[77248]: VG ceph_vg0 finished
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:17 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 02:14:18 np0005539551 magical_hopper[77200]: stderr: got monmap epoch 1
Nov 29 02:14:18 np0005539551 magical_hopper[77200]: --> Creating keyring file for osd.1
Nov 29 02:14:18 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 02:14:18 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 02:14:18 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid b9602a40-9939-419b-9b17-98d4b245553e --setuser ceph --setgroup ceph
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: stderr: 2025-11-29T07:14:18.350+0000 7fb3efaa3740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: stderr: 2025-11-29T07:14:18.350+0000 7fb3efaa3740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: stderr: 2025-11-29T07:14:18.350+0000 7fb3efaa3740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: stderr: 2025-11-29T07:14:18.350+0000 7fb3efaa3740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 02:14:20 np0005539551 magical_hopper[77200]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 02:14:20 np0005539551 systemd[1]: libpod-6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884.scope: Deactivated successfully.
Nov 29 02:14:20 np0005539551 systemd[1]: libpod-6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884.scope: Consumed 2.502s CPU time.
Nov 29 02:14:20 np0005539551 conmon[77200]: conmon 6327dacdb70de374b6f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884.scope/container/memory.events
Nov 29 02:14:20 np0005539551 podman[77184]: 2025-11-29 07:14:20.678324455 +0000 UTC m=+4.512367085 container died 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:21 np0005539551 systemd[1]: var-lib-containers-storage-overlay-6a69b0c71923452230f5154feface17a22e814ee5d7d79a22afbe06e457b0fd9-merged.mount: Deactivated successfully.
Nov 29 02:14:25 np0005539551 podman[77184]: 2025-11-29 07:14:25.056779 +0000 UTC m=+8.890821630 container remove 6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:25 np0005539551 systemd[1]: libpod-conmon-6327dacdb70de374b6f39920c0f58a3ab9cbf5085d135288d17db2bde7d50884.scope: Deactivated successfully.
Nov 29 02:14:25 np0005539551 podman[78311]: 2025-11-29 07:14:25.607887007 +0000 UTC m=+0.020327718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:25 np0005539551 podman[78311]: 2025-11-29 07:14:25.761806895 +0000 UTC m=+0.174247576 container create 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:25 np0005539551 systemd[1]: Started libpod-conmon-3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe.scope.
Nov 29 02:14:26 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:26 np0005539551 podman[78311]: 2025-11-29 07:14:26.355698508 +0000 UTC m=+0.768139229 container init 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:26 np0005539551 podman[78311]: 2025-11-29 07:14:26.363720172 +0000 UTC m=+0.776160853 container start 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:14:26 np0005539551 crazy_rubin[78328]: 167 167
Nov 29 02:14:26 np0005539551 systemd[1]: libpod-3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe.scope: Deactivated successfully.
Nov 29 02:14:26 np0005539551 podman[78311]: 2025-11-29 07:14:26.404982604 +0000 UTC m=+0.817423285 container attach 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 02:14:26 np0005539551 podman[78311]: 2025-11-29 07:14:26.405578721 +0000 UTC m=+0.818019422 container died 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:28 np0005539551 systemd[1]: var-lib-containers-storage-overlay-e53a266db24d2875295df9b6a4edc356b442b2536443f51fb1de1b4ffd2695d9-merged.mount: Deactivated successfully.
Nov 29 02:14:28 np0005539551 podman[78311]: 2025-11-29 07:14:28.976921608 +0000 UTC m=+3.389362289 container remove 3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_rubin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:14:28 np0005539551 systemd[1]: libpod-conmon-3b9458d5ad77bf3439845f7f4d6aa2b9239ba66d7f5fc2eb5339de4d1bfb0dfe.scope: Deactivated successfully.
Nov 29 02:14:29 np0005539551 podman[78351]: 2025-11-29 07:14:29.153511878 +0000 UTC m=+0.064053569 container create 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:14:29 np0005539551 systemd[1]: Started libpod-conmon-403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5.scope.
Nov 29 02:14:29 np0005539551 podman[78351]: 2025-11-29 07:14:29.113365158 +0000 UTC m=+0.023906869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:29 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91edca64c1f206a571b7d6e191c7517425b73ea09d9eebb3b05bae82b91fcee1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91edca64c1f206a571b7d6e191c7517425b73ea09d9eebb3b05bae82b91fcee1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91edca64c1f206a571b7d6e191c7517425b73ea09d9eebb3b05bae82b91fcee1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91edca64c1f206a571b7d6e191c7517425b73ea09d9eebb3b05bae82b91fcee1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539551 podman[78351]: 2025-11-29 07:14:29.251592757 +0000 UTC m=+0.162134478 container init 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:14:29 np0005539551 podman[78351]: 2025-11-29 07:14:29.257869152 +0000 UTC m=+0.168410843 container start 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:14:29 np0005539551 podman[78351]: 2025-11-29 07:14:29.262892373 +0000 UTC m=+0.173434074 container attach 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]: {
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:    "1": [
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:        {
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "devices": [
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "/dev/loop3"
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            ],
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "lv_name": "ceph_lv0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "lv_size": "7511998464",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9I5mop-Zidd-KLNm-Jz2Z-f1u0-SzQ4-dyQxg4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9602a40-9939-419b-9b17-98d4b245553e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "lv_uuid": "9I5mop-Zidd-KLNm-Jz2Z-f1u0-SzQ4-dyQxg4",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "name": "ceph_lv0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "tags": {
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.block_uuid": "9I5mop-Zidd-KLNm-Jz2Z-f1u0-SzQ4-dyQxg4",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.cluster_name": "ceph",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.crush_device_class": "",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.encrypted": "0",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.osd_fsid": "b9602a40-9939-419b-9b17-98d4b245553e",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.osd_id": "1",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.type": "block",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:                "ceph.vdo": "0"
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            },
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "type": "block",
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:            "vg_name": "ceph_vg0"
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:        }
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]:    ]
Nov 29 02:14:30 np0005539551 gracious_thompson[78367]: }
Nov 29 02:14:30 np0005539551 systemd[1]: libpod-403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5.scope: Deactivated successfully.
Nov 29 02:14:30 np0005539551 podman[78351]: 2025-11-29 07:14:30.082730284 +0000 UTC m=+0.993271995 container died 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:14:30 np0005539551 systemd[1]: var-lib-containers-storage-overlay-91edca64c1f206a571b7d6e191c7517425b73ea09d9eebb3b05bae82b91fcee1-merged.mount: Deactivated successfully.
Nov 29 02:14:30 np0005539551 podman[78351]: 2025-11-29 07:14:30.143655705 +0000 UTC m=+1.054197396 container remove 403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:14:30 np0005539551 systemd[1]: libpod-conmon-403e3b1fa24c0e8a8307ad7d2771c7b0551caa85eb1cb29117a2570ab2ffb3b5.scope: Deactivated successfully.
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.765512888 +0000 UTC m=+0.039843253 container create 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:14:30 np0005539551 systemd[1]: Started libpod-conmon-7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6.scope.
Nov 29 02:14:30 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.747634509 +0000 UTC m=+0.021964894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.845963375 +0000 UTC m=+0.120293750 container init 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.851609042 +0000 UTC m=+0.125939407 container start 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.854989377 +0000 UTC m=+0.129319742 container attach 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:14:30 np0005539551 nostalgic_pare[78547]: 167 167
Nov 29 02:14:30 np0005539551 systemd[1]: libpod-7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6.scope: Deactivated successfully.
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.856810178 +0000 UTC m=+0.131140543 container died 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:14:30 np0005539551 systemd[1]: var-lib-containers-storage-overlay-f6024b772ee9b6883ffc7436e2b2d10ad9e48a303e0a795db914d36bd388e5a2-merged.mount: Deactivated successfully.
Nov 29 02:14:30 np0005539551 podman[78531]: 2025-11-29 07:14:30.902182714 +0000 UTC m=+0.176513079 container remove 7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:14:30 np0005539551 systemd[1]: libpod-conmon-7e3df19c3fdaa4d168ad80a9e7f9c70c899d613faff185f27f718fcad6e045c6.scope: Deactivated successfully.
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.14593559 +0000 UTC m=+0.047885007 container create c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:14:31 np0005539551 systemd[1]: Started libpod-conmon-c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980.scope.
Nov 29 02:14:31 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.127700522 +0000 UTC m=+0.029649959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.224243137 +0000 UTC m=+0.126192554 container init c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.232522738 +0000 UTC m=+0.134472155 container start c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.236239072 +0000 UTC m=+0.138188489 container attach c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:31 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test[78596]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 02:14:31 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test[78596]:                            [--no-systemd] [--no-tmpfs]
Nov 29 02:14:31 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test[78596]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 02:14:31 np0005539551 systemd[1]: libpod-c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980.scope: Deactivated successfully.
Nov 29 02:14:31 np0005539551 podman[78579]: 2025-11-29 07:14:31.978058045 +0000 UTC m=+0.880007472 container died c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:14:32 np0005539551 systemd[1]: var-lib-containers-storage-overlay-94e22cef3e712cb104393ed0d9b97733c9de78cf63747b346c2e03ad9a4c415c-merged.mount: Deactivated successfully.
Nov 29 02:14:32 np0005539551 podman[78579]: 2025-11-29 07:14:32.782071035 +0000 UTC m=+1.684020452 container remove c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:14:32 np0005539551 systemd[1]: libpod-conmon-c3f3e3d59e27fdac2255f7607ed69347a21cc9c9818b09ac1bd46bab3c351980.scope: Deactivated successfully.
Nov 29 02:14:33 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:33 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:33 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:33 np0005539551 systemd[1]: Reloading.
Nov 29 02:14:33 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:33 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:33 np0005539551 systemd[1]: Starting Ceph osd.1 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:14:33 np0005539551 podman[78755]: 2025-11-29 07:14:33.746457013 +0000 UTC m=+0.044344919 container create 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:33 np0005539551 podman[78755]: 2025-11-29 07:14:33.72413433 +0000 UTC m=+0.022022256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:33 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:33 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539551 podman[78755]: 2025-11-29 07:14:33.922285092 +0000 UTC m=+0.220173018 container init 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:14:33 np0005539551 podman[78755]: 2025-11-29 07:14:33.929235716 +0000 UTC m=+0.227123622 container start 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:33 np0005539551 podman[78755]: 2025-11-29 07:14:33.933225947 +0000 UTC m=+0.231113883 container attach 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:34 np0005539551 bash[78755]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 02:14:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate[78771]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 02:14:34 np0005539551 bash[78755]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 02:14:34 np0005539551 systemd[1]: libpod-229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56.scope: Deactivated successfully.
Nov 29 02:14:34 np0005539551 podman[78872]: 2025-11-29 07:14:34.919915977 +0000 UTC m=+0.023329462 container died 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:14:34 np0005539551 systemd[1]: var-lib-containers-storage-overlay-0167a9f0d51a95e685d5b2d2737f79986b85f959789b1a7917b98601a0e4df23-merged.mount: Deactivated successfully.
Nov 29 02:14:34 np0005539551 podman[78872]: 2025-11-29 07:14:34.974171803 +0000 UTC m=+0.077585268 container remove 229931764e2b4bb4fb5eab5837b4bb7a022a803d1713b8410e3a5fc364b4ad56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1-activate, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:14:35 np0005539551 podman[78934]: 2025-11-29 07:14:35.176672087 +0000 UTC m=+0.045260385 container create 31b026d435d856bf28a41c999089f2d08d27ec3e5aae7cf1b807ade111babe5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:14:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effe9b21d4d39bfa99c94a373945c75007cd121f51544ea78084d7aa526cd6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effe9b21d4d39bfa99c94a373945c75007cd121f51544ea78084d7aa526cd6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effe9b21d4d39bfa99c94a373945c75007cd121f51544ea78084d7aa526cd6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effe9b21d4d39bfa99c94a373945c75007cd121f51544ea78084d7aa526cd6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effe9b21d4d39bfa99c94a373945c75007cd121f51544ea78084d7aa526cd6d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:35 np0005539551 podman[78934]: 2025-11-29 07:14:35.235756677 +0000 UTC m=+0.104345005 container init 31b026d435d856bf28a41c999089f2d08d27ec3e5aae7cf1b807ade111babe5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:14:35 np0005539551 podman[78934]: 2025-11-29 07:14:35.244030218 +0000 UTC m=+0.112618536 container start 31b026d435d856bf28a41c999089f2d08d27ec3e5aae7cf1b807ade111babe5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:14:35 np0005539551 bash[78934]: 31b026d435d856bf28a41c999089f2d08d27ec3e5aae7cf1b807ade111babe5d
Nov 29 02:14:35 np0005539551 podman[78934]: 2025-11-29 07:14:35.155909378 +0000 UTC m=+0.024497706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:35 np0005539551 systemd[1]: Started Ceph osd.1 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: pidfile_write: ignore empty --pid-file
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f18b7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f18b7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f18b7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f18b7800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f26ef800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f26ef800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f26ef800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f26ef800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f26ef800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f18b7800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: load: jerasure load: lrc 
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:35 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:14:35 np0005539551 podman[79110]: 2025-11-29 07:14:35.853374572 +0000 UTC m=+0.021772539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:14:36 np0005539551 podman[79110]: 2025-11-29 07:14:36.103979479 +0000 UTC m=+0.272377426 container create 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2770c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs mount
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs mount shared_bdev_used = 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Git sha 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DB SUMMARY
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DB Session ID:  L61PT15XF0TCY3NHG5MB
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                     Options.env: 0x5616f2741c70
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                Options.info_log: 0x5616f1934ba0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.write_buffer_manager: 0x5616f284a460
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.row_cache: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.wal_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.wal_compression: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Compression algorithms supported:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kZSTD supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5616f192add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f1934600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f19345c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f19345c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f19345c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5616f192a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f1e3474b-bf77-4cda-a0f0-ae4d3a5a26aa
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400476415193, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400476415499, "job": 1, "event": "recovery_finished"}
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: freelist init
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: freelist _read_cfg
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs umount
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bdev(0x5616f2771400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs mount
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluefs mount shared_bdev_used = 4718592
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Git sha 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DB SUMMARY
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DB Session ID:  L61PT15XF0TCY3NHG5MA
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                     Options.env: 0x5616f1976690
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                Options.info_log: 0x5616f273d1c0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.write_buffer_manager: 0x5616f284a8c0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.row_cache: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                              Options.wal_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.wal_compression: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Compression algorithms supported:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kZSTD supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kXpressCompression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kZlibCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 systemd[1]: Started libpod-conmon-0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495.scope.
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d360)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5616f273d2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5616f192add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f1e3474b-bf77-4cda-a0f0-ae4d3a5a26aa
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400476660059, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:14:36 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:14:36 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400477303210, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400476, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f1e3474b-bf77-4cda-a0f0-ae4d3a5a26aa", "db_session_id": "L61PT15XF0TCY3NHG5MA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:37 np0005539551 podman[79110]: 2025-11-29 07:14:37.323723876 +0000 UTC m=+1.492121843 container init 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:14:37 np0005539551 podman[79110]: 2025-11-29 07:14:37.334593699 +0000 UTC m=+1.502991636 container start 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:14:37 np0005539551 amazing_shannon[79481]: 167 167
Nov 29 02:14:37 np0005539551 systemd[1]: libpod-0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495.scope: Deactivated successfully.
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400477354749, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400477, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f1e3474b-bf77-4cda-a0f0-ae4d3a5a26aa", "db_session_id": "L61PT15XF0TCY3NHG5MA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:37 np0005539551 podman[79110]: 2025-11-29 07:14:37.414054799 +0000 UTC m=+1.582452776 container attach 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:14:37 np0005539551 podman[79110]: 2025-11-29 07:14:37.415653823 +0000 UTC m=+1.584051770 container died 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400477430241, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400477, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f1e3474b-bf77-4cda-a0f0-ae4d3a5a26aa", "db_session_id": "L61PT15XF0TCY3NHG5MA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400477474105, "job": 1, "event": "recovery_finished"}
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 02:14:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-1f4eec60933db7f92aac121d620967b00a7dd62817a892e6f02f880e4a24b088-merged.mount: Deactivated successfully.
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5616f2910700
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: DB pointer 0x5616f2833a00
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.9 total, 0.9 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.9 total, 0.9 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.9 total, 0.9 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.9 total, 0.9 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 460.80 MB usag
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: _get_class not permitted to load lua
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: _get_class not permitted to load sdk
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: _get_class not permitted to load test_remote_reads
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 load_pgs
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 load_pgs opened 0 pgs
Nov 29 02:14:37 np0005539551 ceph-osd[78953]: osd.1 0 log_to_monitors true
Nov 29 02:14:37 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1[78949]: 2025-11-29T07:14:37.561+0000 7f0fe6aac740 -1 osd.1 0 log_to_monitors true
Nov 29 02:14:37 np0005539551 podman[79110]: 2025-11-29 07:14:37.564010665 +0000 UTC m=+1.732408612 container remove 0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:14:37 np0005539551 systemd[1]: libpod-conmon-0002bf60b89ffeff2d2037a388631b5dc68f43ea1c19589811e9d6bbe050d495.scope: Deactivated successfully.
Nov 29 02:14:37 np0005539551 podman[79568]: 2025-11-29 07:14:37.724067644 +0000 UTC m=+0.041123988 container create 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:37 np0005539551 systemd[1]: Started libpod-conmon-2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414.scope.
Nov 29 02:14:37 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4a8ecade2d5ebb75dfdb2e16b096b573240ddb54e4971682809cad4953a45e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4a8ecade2d5ebb75dfdb2e16b096b573240ddb54e4971682809cad4953a45e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4a8ecade2d5ebb75dfdb2e16b096b573240ddb54e4971682809cad4953a45e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4a8ecade2d5ebb75dfdb2e16b096b573240ddb54e4971682809cad4953a45e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:37 np0005539551 podman[79568]: 2025-11-29 07:14:37.705813965 +0000 UTC m=+0.022870329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:37 np0005539551 podman[79568]: 2025-11-29 07:14:37.811945869 +0000 UTC m=+0.129002213 container init 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:37 np0005539551 podman[79568]: 2025-11-29 07:14:37.818121021 +0000 UTC m=+0.135177365 container start 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:37 np0005539551 podman[79568]: 2025-11-29 07:14:37.821941967 +0000 UTC m=+0.138998331 container attach 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]: {
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:    "b9602a40-9939-419b-9b17-98d4b245553e": {
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:        "osd_id": 1,
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:        "osd_uuid": "b9602a40-9939-419b-9b17-98d4b245553e",
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:        "type": "bluestore"
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]:    }
Nov 29 02:14:38 np0005539551 reverent_mclean[79585]: }
Nov 29 02:14:38 np0005539551 systemd[1]: libpod-2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414.scope: Deactivated successfully.
Nov 29 02:14:38 np0005539551 podman[79568]: 2025-11-29 07:14:38.751703578 +0000 UTC m=+1.068759942 container died 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:38 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ec4a8ecade2d5ebb75dfdb2e16b096b573240ddb54e4971682809cad4953a45e-merged.mount: Deactivated successfully.
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 done with init, starting boot process
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 start_boot
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 02:14:38 np0005539551 ceph-osd[78953]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 02:14:38 np0005539551 podman[79568]: 2025-11-29 07:14:38.807303251 +0000 UTC m=+1.124359595 container remove 2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:38 np0005539551 systemd[1]: libpod-conmon-2f0b625f92419f88088d3c26e4305a8f1b94274b0e287ecf74cd1827398ab414.scope: Deactivated successfully.
Nov 29 02:14:40 np0005539551 podman[79840]: 2025-11-29 07:14:40.143800298 +0000 UTC m=+0.110395433 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:40 np0005539551 podman[79840]: 2025-11-29 07:14:40.341596391 +0000 UTC m=+0.308191536 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.00619344 +0000 UTC m=+0.058611168 container create 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:41.96969404 +0000 UTC m=+0.022111798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:42 np0005539551 systemd[1]: Started libpod-conmon-38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6.scope.
Nov 29 02:14:42 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.146763634 +0000 UTC m=+0.199181382 container init 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.156442085 +0000 UTC m=+0.208859813 container start 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:42 np0005539551 flamboyant_cartwright[80182]: 167 167
Nov 29 02:14:42 np0005539551 systemd[1]: libpod-38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6.scope: Deactivated successfully.
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.190878536 +0000 UTC m=+0.243296264 container attach 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.191898914 +0000 UTC m=+0.244316642 container died 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:42 np0005539551 systemd[1]: var-lib-containers-storage-overlay-dcb04821287b9f575b35cf8b6da05f4e91975a8f4878f110bc45c9a22d41b37b-merged.mount: Deactivated successfully.
Nov 29 02:14:42 np0005539551 podman[80166]: 2025-11-29 07:14:42.325018042 +0000 UTC m=+0.377435770 container remove 38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:42 np0005539551 systemd[1]: libpod-conmon-38441e8290e13b7c39879336695ccded19311f2dcbc2c3a9ac9e0a2b8740bfb6.scope: Deactivated successfully.
Nov 29 02:14:42 np0005539551 podman[80209]: 2025-11-29 07:14:42.515799199 +0000 UTC m=+0.060162911 container create 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:14:42 np0005539551 systemd[1]: Started libpod-conmon-2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1.scope.
Nov 29 02:14:42 np0005539551 podman[80209]: 2025-11-29 07:14:42.48464996 +0000 UTC m=+0.029013692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:42 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:14:42 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571aab32678ed00850bccae09e90e1a284a8ec076ae5f2451c9664edd695da4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571aab32678ed00850bccae09e90e1a284a8ec076ae5f2451c9664edd695da4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571aab32678ed00850bccae09e90e1a284a8ec076ae5f2451c9664edd695da4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571aab32678ed00850bccae09e90e1a284a8ec076ae5f2451c9664edd695da4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539551 podman[80209]: 2025-11-29 07:14:42.627147579 +0000 UTC m=+0.171511301 container init 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:14:42 np0005539551 podman[80209]: 2025-11-29 07:14:42.636230442 +0000 UTC m=+0.180594144 container start 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:14:42 np0005539551 podman[80209]: 2025-11-29 07:14:42.645925713 +0000 UTC m=+0.190289435 container attach 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.183 iops: 5934.887 elapsed_sec: 0.505
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: log_channel(cluster) log [WRN] : OSD bench result of 5934.887354 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 0 waiting for initial osdmap
Nov 29 02:14:43 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1[78949]: 2025-11-29T07:14:43.037+0000 7f0fe3243640 -1 osd.1 0 waiting for initial osdmap
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 check_osdmap_features require_osd_release unknown -> reef
Nov 29 02:14:43 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-1[78949]: 2025-11-29T07:14:43.064+0000 7f0fde054640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 set_numa_affinity not setting numa affinity
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]: [
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:    {
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "available": false,
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "ceph_device": false,
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "lsm_data": {},
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "lvs": [],
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "path": "/dev/sr0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "rejected_reasons": [
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "Insufficient space (<5GB)",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "Has a FileSystem"
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        ],
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        "sys_api": {
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "actuators": null,
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "device_nodes": "sr0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "devname": "sr0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "human_readable_size": "482.00 KB",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "id_bus": "ata",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "model": "QEMU DVD-ROM",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "nr_requests": "2",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "parent": "/dev/sr0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "partitions": {},
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "path": "/dev/sr0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "removable": "1",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "rev": "2.5+",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "ro": "0",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "rotational": "1",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "sas_address": "",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "sas_device_handle": "",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "scheduler_mode": "mq-deadline",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "sectors": 0,
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "sectorsize": "2048",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "size": 493568.0,
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "support_discard": "2048",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "type": "disk",
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:            "vendor": "QEMU"
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:        }
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]:    }
Nov 29 02:14:43 np0005539551 naughty_swartz[80225]: ]
Nov 29 02:14:43 np0005539551 systemd[1]: libpod-2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1.scope: Deactivated successfully.
Nov 29 02:14:43 np0005539551 systemd[1]: libpod-2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1.scope: Consumed 1.235s CPU time.
Nov 29 02:14:43 np0005539551 podman[80209]: 2025-11-29 07:14:43.861438802 +0000 UTC m=+1.405802514 container died 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:14:43 np0005539551 ceph-osd[78953]: osd.1 10 tick checking mon for new map
Nov 29 02:14:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-8571aab32678ed00850bccae09e90e1a284a8ec076ae5f2451c9664edd695da4-merged.mount: Deactivated successfully.
Nov 29 02:14:44 np0005539551 podman[80209]: 2025-11-29 07:14:44.265689011 +0000 UTC m=+1.810052723 container remove 2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_swartz, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:44 np0005539551 systemd[1]: libpod-conmon-2f4a44ba2ffa23d3ad07da4a723251489b6633a5b92d709c69abaf88a90048a1.scope: Deactivated successfully.
Nov 29 02:14:45 np0005539551 ceph-osd[78953]: osd.1 11 state: booting -> active
Nov 29 02:14:46 np0005539551 ceph-osd[78953]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 02:14:46 np0005539551 ceph-osd[78953]: osd.1 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 02:14:46 np0005539551 ceph-osd[78953]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 02:14:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:47 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:50 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=14.757394791s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active pruub 37.026969910s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22 pruub=14.757394791s) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown pruub 37.026969910s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.5( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.6( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.7( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.8( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.9( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.11( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.12( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.10( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.15( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.16( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.13( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.14( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.17( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.18( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.19( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1a( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1d( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1e( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1b( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1c( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1f( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.1( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.2( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.3( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 23 pg[2.4( empty local-lis/les=14/15 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=22/24 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=14/14 les/c/f=15/15/0 sis=22) [1] r=0 lpr=22 pi=[14,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:02 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:02 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 02:15:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794911385s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304866791s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794825554s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304817200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794799805s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304817200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794859886s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304866791s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794667244s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304805756s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794646263s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304805756s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794550896s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304771423s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794527054s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304794312s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794521332s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304771423s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794439316s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304725647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794506073s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304794312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794419289s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304725647s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794303894s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304706573s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794229507s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304679871s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794275284s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304706573s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794203758s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304679871s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794145584s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304630280s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.794123650s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304630280s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793972015s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304565430s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793949127s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304565430s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793736458s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304386139s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793669701s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304386139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793727875s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304466248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793708801s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304466248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.789811134s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.300643921s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.789967537s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.300819397s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.789785385s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.300643921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.789946556s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.300819397s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793890953s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 39.304843903s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=8.793860435s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 39.304843903s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 02:15:08 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.004063868 +0000 UTC m=+0.041301890 container create 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:15:09 np0005539551 systemd[1]: Started libpod-conmon-9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621.scope.
Nov 29 02:15:09 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:08.987062568 +0000 UTC m=+0.024300610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.090480964 +0000 UTC m=+0.127719006 container init 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.097370722 +0000 UTC m=+0.134608744 container start 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:15:09 np0005539551 unruffled_jepsen[81450]: 167 167
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.102465124 +0000 UTC m=+0.139703176 container attach 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:09 np0005539551 systemd[1]: libpod-9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621.scope: Deactivated successfully.
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.103163155 +0000 UTC m=+0.140401177 container died 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:15:09 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d4ce174daefc94ad43701f9eb08745a16406ad02eef74c8b3fbebd5d38ab12b3-merged.mount: Deactivated successfully.
Nov 29 02:15:09 np0005539551 podman[81434]: 2025-11-29 07:15:09.145446495 +0000 UTC m=+0.182684517 container remove 9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:15:09 np0005539551 systemd[1]: libpod-conmon-9d096d3996f0c960c13f16e8cce1579f305666fe42c03dd6ddcb8fb4fe433621.scope: Deactivated successfully.
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.229534961 +0000 UTC m=+0.044984752 container create b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:09 np0005539551 systemd[1]: Started libpod-conmon-b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb.scope.
Nov 29 02:15:09 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:15:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6557814d7e932b089f400a3c9dec40330fa738f3ac9538aacc0e32d61ea82da/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6557814d7e932b089f400a3c9dec40330fa738f3ac9538aacc0e32d61ea82da/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6557814d7e932b089f400a3c9dec40330fa738f3ac9538aacc0e32d61ea82da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6557814d7e932b089f400a3c9dec40330fa738f3ac9538aacc0e32d61ea82da/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.300894084 +0000 UTC m=+0.116343895 container init b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.306919814 +0000 UTC m=+0.122369605 container start b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.211779767 +0000 UTC m=+0.027229568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.310265946 +0000 UTC m=+0.125715767 container attach b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:15:09 np0005539551 systemd[1]: libpod-b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb.scope: Deactivated successfully.
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.408461655 +0000 UTC m=+0.223911436 container died b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:09 np0005539551 systemd[1]: var-lib-containers-storage-overlay-c6557814d7e932b089f400a3c9dec40330fa738f3ac9538aacc0e32d61ea82da-merged.mount: Deactivated successfully.
Nov 29 02:15:09 np0005539551 podman[81469]: 2025-11-29 07:15:09.44560422 +0000 UTC m=+0.261054011 container remove b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:09 np0005539551 systemd[1]: libpod-conmon-b6d001fdb29fe8bd06c6ca863c0d730b89b33f9477387660025e3b0dd875c6bb.scope: Deactivated successfully.
Nov 29 02:15:09 np0005539551 systemd[1]: Reloading.
Nov 29 02:15:09 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:09 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:09 np0005539551 systemd[1]: Reloading.
Nov 29 02:15:09 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:09 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:09 np0005539551 systemd[1]: Starting Ceph mon.compute-1 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:10 np0005539551 podman[81653]: 2025-11-29 07:15:10.203214534 +0000 UTC m=+0.039579619 container create 6ae96226e672b832d612364d4aded5b27ce7c0b5a56f101416ef8b3e6c2f34c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:15:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa74e6362ca7cbb75ab0be05428f86d0b89f66f63d45e7502f6a8b3a5242edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa74e6362ca7cbb75ab0be05428f86d0b89f66f63d45e7502f6a8b3a5242edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa74e6362ca7cbb75ab0be05428f86d0b89f66f63d45e7502f6a8b3a5242edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa74e6362ca7cbb75ab0be05428f86d0b89f66f63d45e7502f6a8b3a5242edb/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:10 np0005539551 podman[81653]: 2025-11-29 07:15:10.267439473 +0000 UTC m=+0.103804578 container init 6ae96226e672b832d612364d4aded5b27ce7c0b5a56f101416ef8b3e6c2f34c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:10 np0005539551 podman[81653]: 2025-11-29 07:15:10.27368612 +0000 UTC m=+0.110051205 container start 6ae96226e672b832d612364d4aded5b27ce7c0b5a56f101416ef8b3e6c2f34c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:15:10 np0005539551 bash[81653]: 6ae96226e672b832d612364d4aded5b27ce7c0b5a56f101416ef8b3e6c2f34c3
Nov 29 02:15:10 np0005539551 podman[81653]: 2025-11-29 07:15:10.185102371 +0000 UTC m=+0.021467486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:10 np0005539551 systemd[1]: Started Ceph mon.compute-1 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: pidfile_write: ignore empty --pid-file
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: load: jerasure load: lrc 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Git sha 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: DB SUMMARY
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: DB Session ID:  BQ87E2QTX8PUCMR4F5B1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                                     Options.env: 0x55702055cc40
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                                Options.info_log: 0x557021edcfc0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                    Options.write_buffer_manager: 0x557021eecb40
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                               Options.row_cache: None
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                              Options.wal_filter: None
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.wal_compression: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Compression algorithms supported:
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kZSTD supported: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:           Options.merge_operator: 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557021edcc00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557021ed51f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.compression: NoCompression
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a0019487-5659-4bb5-bfa3-5ec467e1c6c5
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400510326869, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400510328953, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400510329055, "job": 1, "event": "recovery_finished"}
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557021efee00
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: DB pointer 0x557021f88000
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(???) e0 preinit fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e29 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Deploying daemon mon.compute-2 on compute-2
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 29 02:15:12 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 29 02:15:12 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 29 02:15:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 02:15:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:16 np0005539551 ceph-mon[81672]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 29 02:15:16 np0005539551 ceph-mon[81672]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:15:16 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:15:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 29 02:15:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 29 02:15:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 29 02:15:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 29 02:15:21 np0005539551 ceph-mon[81672]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:15:21 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:15:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 29 02:15:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Deploying daemon mon.compute-1 on compute-1
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/206725884' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'backups'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'images'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1019936097 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'backups'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'images'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:27 np0005539551 ceph-mon[81672]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: Deploying daemon mgr.compute-2.zfrvoq on compute-2
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:27 np0005539551 ceph-mon[81672]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:15:09.354209Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.701023545 +0000 UTC m=+0.044409624 container create 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:28 np0005539551 systemd[1]: Started libpod-conmon-26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a.scope.
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.683283922 +0000 UTC m=+0.026670011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:28 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.795724389 +0000 UTC m=+0.139110498 container init 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.804370629 +0000 UTC m=+0.147756718 container start 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.808542615 +0000 UTC m=+0.151928714 container attach 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:15:28 np0005539551 silly_franklin[81871]: 167 167
Nov 29 02:15:28 np0005539551 systemd[1]: libpod-26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a.scope: Deactivated successfully.
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.812163633 +0000 UTC m=+0.155549732 container died 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:15:28 np0005539551 systemd[1]: var-lib-containers-storage-overlay-0951eb4ece7b20cd4536fb1d424b02ae940f58c5dad64a8ca1f83ca4ffc5cf60-merged.mount: Deactivated successfully.
Nov 29 02:15:28 np0005539551 podman[81854]: 2025-11-29 07:15:28.854186175 +0000 UTC m=+0.197572254 container remove 26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:15:28 np0005539551 systemd[1]: libpod-conmon-26712e8562b7e26cba82d4b7a212efd6a9d7da8b04f5bc8355f3f1272f19658a.scope: Deactivated successfully.
Nov 29 02:15:28 np0005539551 systemd[1]: Reloading.
Nov 29 02:15:28 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:28 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:29 np0005539551 systemd[1]: Reloading.
Nov 29 02:15:29 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:29 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:29 np0005539551 systemd[1]: Starting Ceph mgr.compute-1.fchyan for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539551 ceph-mon[81672]:    application not enabled on pool 'images'
Nov 29 02:15:29 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:29 np0005539551 ceph-mon[81672]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:29 np0005539551 ceph-mon[81672]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: Deploying daemon mgr.compute-1.fchyan on compute-1
Nov 29 02:15:29 np0005539551 podman[82015]: 2025-11-29 07:15:29.634240704 +0000 UTC m=+0.044592550 container create ea9c9454600523a6b1b859028fb9f6bbb4ae955acbda10f3dd8383f76638c82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e7203d43a6b049c14a9dfd9262afc5bc38806ca25e54f2a1c13aa63f29775f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e7203d43a6b049c14a9dfd9262afc5bc38806ca25e54f2a1c13aa63f29775f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e7203d43a6b049c14a9dfd9262afc5bc38806ca25e54f2a1c13aa63f29775f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e7203d43a6b049c14a9dfd9262afc5bc38806ca25e54f2a1c13aa63f29775f/merged/var/lib/ceph/mgr/ceph-compute-1.fchyan supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:29 np0005539551 podman[82015]: 2025-11-29 07:15:29.695602486 +0000 UTC m=+0.105954382 container init ea9c9454600523a6b1b859028fb9f6bbb4ae955acbda10f3dd8383f76638c82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:15:29 np0005539551 podman[82015]: 2025-11-29 07:15:29.702098612 +0000 UTC m=+0.112450478 container start ea9c9454600523a6b1b859028fb9f6bbb4ae955acbda10f3dd8383f76638c82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:29 np0005539551 bash[82015]: ea9c9454600523a6b1b859028fb9f6bbb4ae955acbda10f3dd8383f76638c82b
Nov 29 02:15:29 np0005539551 podman[82015]: 2025-11-29 07:15:29.615610735 +0000 UTC m=+0.025962601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:29 np0005539551 systemd[1]: Started Ceph mgr.compute-1.fchyan for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:29 np0005539551 ceph-mgr[82034]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:15:29 np0005539551 ceph-mgr[82034]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:15:29 np0005539551 ceph-mgr[82034]: pidfile_write: ignore empty --pid-file
Nov 29 02:15:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 02:15:29 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'alerts'
Nov 29 02:15:30 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 02:15:30 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 02:15:30 np0005539551 ceph-mgr[82034]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:15:30 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'balancer'
Nov 29 02:15:30 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:30.213+0000 7fa28309c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:15:30 np0005539551 ceph-mgr[82034]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:15:30 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'cephadm'
Nov 29 02:15:30 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:30.490+0000 7fa28309c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:15:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:15:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 02:15:32 np0005539551 ceph-mon[81672]: Deploying daemon crash.compute-2 on compute-2
Nov 29 02:15:32 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:15:32 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Nov 29 02:15:32 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Nov 29 02:15:32 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'crash'
Nov 29 02:15:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020053305 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:33 np0005539551 ceph-mgr[82034]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'dashboard'
Nov 29 02:15:33 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:33.055+0000 7fa28309c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 02:15:33 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:15:34 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539551 ceph-mgr[82034]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:15:34 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:34.973+0000 7fa28309c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 29 02:15:35 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 29 02:15:35 np0005539551 systemd[72521]: Starting Mark boot as successful...
Nov 29 02:15:35 np0005539551 systemd[72521]: Finished Mark boot as successful.
Nov 29 02:15:35 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:15:35 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:15:35 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]:  from numpy import show_config as show_numpy_config
Nov 29 02:15:35 np0005539551 ceph-mgr[82034]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'influx'
Nov 29 02:15:35 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:35.619+0000 7fa28309c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539551 ceph-mgr[82034]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'insights'
Nov 29 02:15:35 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:35.877+0000 7fa28309c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:15:36 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'iostat'
Nov 29 02:15:36 np0005539551 ceph-mgr[82034]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:15:36 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:36.389+0000 7fa28309c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:15:36 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:15:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 02:15:36 np0005539551 ceph-mon[81672]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:15:36 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:15:36 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:36 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/1211735059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 29 02:15:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 29 02:15:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054714 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:38 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'localpool'
Nov 29 02:15:38 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:15:38 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]': finished
Nov 29 02:15:39 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'mirroring'
Nov 29 02:15:39 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'nfs'
Nov 29 02:15:39 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:15:39 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:15:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:40 np0005539551 ceph-mgr[82034]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:15:40 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:40.204+0000 7fa28309c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539551 ceph-mgr[82034]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:40.894+0000 7fa28309c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:41.208+0000 7fa28309c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'osd_support'
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:15:41 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:41.466+0000 7fa28309c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:41.772+0000 7fa28309c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'progress'
Nov 29 02:15:42 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:42.032+0000 7fa28309c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539551 ceph-mgr[82034]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'prometheus'
Nov 29 02:15:42 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/278235459' entity='client.admin' 
Nov 29 02:15:42 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:43 np0005539551 ceph-mgr[82034]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:15:43 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:43.102+0000 7fa28309c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:15:43 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:15:43 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 02:15:43 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 02:15:43 np0005539551 ceph-mon[81672]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:43 np0005539551 ceph-mgr[82034]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:15:43 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'restful'
Nov 29 02:15:43 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:43.444+0000 7fa28309c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:15:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 02:15:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 02:15:44 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'rgw'
Nov 29 02:15:44 np0005539551 ceph-mon[81672]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:15:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:15:44 np0005539551 ceph-mon[81672]: Deploying daemon osd.2 on compute-2
Nov 29 02:15:45 np0005539551 ceph-mgr[82034]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:15:45 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'rook'
Nov 29 02:15:45 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:45.149+0000 7fa28309c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:15:46 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Nov 29 02:15:46 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Nov 29 02:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e2 new map
Nov 29 02:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:15:46.469738+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 29 02:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 02:15:47 np0005539551 ceph-mgr[82034]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:15:47 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:47.511+0000 7fa28309c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:15:47 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'selftest'
Nov 29 02:15:47 np0005539551 ceph-mgr[82034]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:15:47 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:47.768+0000 7fa28309c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:15:47 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:15:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539551 ceph-mon[81672]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:15:48 np0005539551 ceph-mon[81672]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:48.045+0000 7fa28309c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'stats'
Nov 29 02:15:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 29 02:15:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'status'
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:48.590+0000 7fa28309c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'telegraf'
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:48.837+0000 7fa28309c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:15:48 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'telemetry'
Nov 29 02:15:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:15:49 np0005539551 ceph-mon[81672]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:49 np0005539551 ceph-mgr[82034]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:15:49 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:49.481+0000 7fa28309c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:15:49 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:15:50 np0005539551 ceph-mgr[82034]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:50 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:50.245+0000 7fa28309c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:50 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'volumes'
Nov 29 02:15:50 np0005539551 ceph-mon[81672]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:51 np0005539551 ceph-mgr[82034]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:15:51 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:51.048+0000 7fa28309c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:15:51 np0005539551 ceph-mgr[82034]: mgr[py] Loading python module 'zabbix'
Nov 29 02:15:51 np0005539551 ceph-mgr[82034]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:15:51 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-1-fchyan[82030]: 2025-11-29T07:15:51.322+0000 7fa28309c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:15:51 np0005539551 ceph-mgr[82034]: ms_deliver_dispatch: unhandled message 0x5625f22251e0 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:15:51 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:15:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 02:15:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 02:15:52 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:15:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 02:15:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:53 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 29 02:15:53 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 29 02:15:53 np0005539551 ceph-mon[81672]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539551 ceph-mon[81672]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:15:53 np0005539551 ceph-mon[81672]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.133289337s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099441528s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.133289337s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099441528s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.339236259s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305679321s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.339236259s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305679321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.129830360s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.096618652s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.129830360s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.096618652s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132436752s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099380493s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132436752s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099380493s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132427216s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099494934s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132427216s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099494934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338275909s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305450439s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338376045s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305549622s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338275909s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305450439s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338376045s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305549622s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132355690s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099617004s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132265091s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099510193s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132355690s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099617004s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132342339s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099632263s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132342339s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099632263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132265091s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099510193s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338047028s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305488586s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.338047028s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305488586s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132300377s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099784851s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132225037s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099723816s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132225037s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099723816s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132300377s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099784851s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.337744713s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305335999s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.337744713s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305335999s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.337552071s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.305328369s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132269859s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.100044250s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.337552071s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.305328369s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.132269859s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.100044250s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.334678650s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 87.302551270s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=37 pruub=10.334678650s) [] r=-1 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.302551270s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.131480217s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 85.099632263s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=8.131480217s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.099632263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539551 ceph-mon[81672]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:15:55 np0005539551 podman[82292]: 2025-11-29 07:15:55.592553227 +0000 UTC m=+0.074610852 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:55 np0005539551 podman[82292]: 2025-11-29 07:15:55.739751258 +0000 UTC m=+0.221808913 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:15:57 np0005539551 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 02:15:57 np0005539551 systemd[1]: session-19.scope: Consumed 8.165s CPU time.
Nov 29 02:15:57 np0005539551 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Nov 29 02:15:57 np0005539551 systemd-logind[788]: Removed session 19.
Nov 29 02:15:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 02:15:58 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e38 crush map has features 3314933000854323200, adjusting msgr requires
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539551 ceph-osd[78953]: osd.1 38 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 02:15:59 np0005539551 ceph-osd[78953]: osd.1 38 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 29 02:15:59 np0005539551 ceph-osd[78953]: osd.1 38 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 29 02:15:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=11.505324364s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 active pruub 93.099845886s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:59 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=38 pruub=11.505324364s) [] r=-1 lpr=38 pi=[29,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.099845886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]: dispatch
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]': finished
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]': finished
Nov 29 02:16:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 02:16:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:01 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/3782324994' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:16:02 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 29 02:16:02 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539551 ceph-mon[81672]: OSD bench result of 5535.863781 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:16:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 29 02:16:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902] boot
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.18( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.12( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.f( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=5.070233822s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.099845886s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=5.070210934s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.099845886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.1c( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.1d( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:05 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 42 pg[2.b( empty local-lis/les=22/24 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=-1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 02:16:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43 pruub=8.274783134s) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active pruub 97.389091492s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43 pruub=8.274783134s) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown pruub 97.389091492s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [1] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.1b( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.10( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.1f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.1c( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.274561882s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900268555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.249114990s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.874816895s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.274507523s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900268555s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.249045372s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.874816895s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273063660s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899009705s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273033142s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899009705s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273153305s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899185181s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273127556s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899185181s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272758484s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.898956299s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272936821s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899223328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272927284s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899238586s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272653580s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.898956299s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272907257s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899223328s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272896767s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899238586s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272764206s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899261475s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273755074s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899971008s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272735596s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899261475s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272620201s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899299622s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273271561s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899963379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272592545s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899299622s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273245811s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899963379s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272362709s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899169922s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273093224s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899925232s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272317886s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899169922s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273121834s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900001526s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273040771s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899925232s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273096085s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900001526s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273005486s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.899986267s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272971153s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899986267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272952080s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900032043s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272923470s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900032043s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272785187s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900054932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272931099s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900230408s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272760391s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900054932s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272957802s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900268555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272901535s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900230408s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272933006s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900268555s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272786140s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900253296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272760391s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900253296s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272698402s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 105.900215149s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.272670746s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.900215149s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.273418427s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 105.899971008s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:16:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:16 np0005539551 ceph-mon[81672]: Deploying daemon rgw.rgw.compute-2.gstlru on compute-2
Nov 29 02:16:17 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 29 02:16:17 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.287682906 +0000 UTC m=+0.026904226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.714555991 +0000 UTC m=+0.453777281 container create 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:16:18 np0005539551 systemd[1]: Started libpod-conmon-51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec.scope.
Nov 29 02:16:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 02:16:18 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.800598886 +0000 UTC m=+0.539820206 container init 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.80684947 +0000 UTC m=+0.546070770 container start 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.810976269 +0000 UTC m=+0.550197569 container attach 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:16:18 np0005539551 admiring_neumann[83512]: 167 167
Nov 29 02:16:18 np0005539551 systemd[1]: libpod-51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec.scope: Deactivated successfully.
Nov 29 02:16:18 np0005539551 podman[83495]: 2025-11-29 07:16:18.812213441 +0000 UTC m=+0.551434741 container died 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay-498e49194b761bb9f1333231c00fda29a57001f5894a63883e374deb2cc81ede-merged.mount: Deactivated successfully.
Nov 29 02:16:19 np0005539551 podman[83495]: 2025-11-29 07:16:19.306646167 +0000 UTC m=+1.045867467 container remove 51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:16:19 np0005539551 systemd[1]: Reloading.
Nov 29 02:16:19 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:19 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:19 np0005539551 systemd[1]: libpod-conmon-51af1ddaa15c4e669c58961a1877fcb02c14d7a0c955a48e64a3c0ec88d45aec.scope: Deactivated successfully.
Nov 29 02:16:19 np0005539551 systemd[1]: Reloading.
Nov 29 02:16:19 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:19 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:20 np0005539551 ceph-mon[81672]: Deploying daemon rgw.rgw.compute-1.wmgqmg on compute-1
Nov 29 02:16:20 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:20 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:20 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:16:20 np0005539551 systemd[1]: Starting Ceph rgw.rgw.compute-1.wmgqmg for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:20 np0005539551 podman[83659]: 2025-11-29 07:16:20.626209198 +0000 UTC m=+0.103386682 container create 2838bfab319f9d879cf4162521351bd3727e763c276fcf7a6bbd767c82450ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-1-wmgqmg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:20 np0005539551 podman[83659]: 2025-11-29 07:16:20.545421979 +0000 UTC m=+0.022599493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:20 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1165776fba52262ef299a38e663f79f6a8a9b29fb4f65b04eb8564f21f7c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1165776fba52262ef299a38e663f79f6a8a9b29fb4f65b04eb8564f21f7c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1165776fba52262ef299a38e663f79f6a8a9b29fb4f65b04eb8564f21f7c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1165776fba52262ef299a38e663f79f6a8a9b29fb4f65b04eb8564f21f7c56/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.wmgqmg supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539551 podman[83659]: 2025-11-29 07:16:20.694487538 +0000 UTC m=+0.171665032 container init 2838bfab319f9d879cf4162521351bd3727e763c276fcf7a6bbd767c82450ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-1-wmgqmg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:20 np0005539551 podman[83659]: 2025-11-29 07:16:20.699308435 +0000 UTC m=+0.176485919 container start 2838bfab319f9d879cf4162521351bd3727e763c276fcf7a6bbd767c82450ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-1-wmgqmg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:16:20 np0005539551 bash[83659]: 2838bfab319f9d879cf4162521351bd3727e763c276fcf7a6bbd767c82450ffb
Nov 29 02:16:20 np0005539551 systemd[1]: Started Ceph rgw.rgw.compute-1.wmgqmg for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 02:16:20 np0005539551 radosgw[83679]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:20 np0005539551 radosgw[83679]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 02:16:20 np0005539551 radosgw[83679]: framework: beast
Nov 29 02:16:20 np0005539551 radosgw[83679]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 29 02:16:20 np0005539551 radosgw[83679]: init_numa not setting numa affinity
Nov 29 02:16:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 02:16:22 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:22 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:23 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 29 02:16:23 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539551 ceph-mon[81672]: Deploying daemon rgw.rgw.compute-0.lkiqxb on compute-0
Nov 29 02:16:25 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 29 02:16:25 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 29 02:16:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 02:16:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:16:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4041558685' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:25 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:25 np0005539551 ceph-mon[81672]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Nov 29 02:16:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 02:16:26 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.101:0/4041558685' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:27 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 29 02:16:27 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e3 new map
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:15:46.469738+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.mmoati{-1:24154} state up:standby seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: Deploying daemon mds.cephfs.compute-2.mmoati on compute-2
Nov 29 02:16:27 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:28 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 29 02:16:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e4 new map
Nov 29 02:16:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:27.688749+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.mmoati{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 02:16:28 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: daemon mds.cephfs.compute-2.mmoati assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: daemon mds.cephfs.compute-2.mmoati is now active in filesystem cephfs as rank 0
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: Deploying daemon mds.cephfs.compute-0.qcwnhf on compute-0
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e5 new map
Nov 29 02:16:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:29.351992+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24154}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:30 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Nov 29 02:16:30 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:31 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 29 02:16:31 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 29 02:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e6 new map
Nov 29 02:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e6 print_map
e6
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:29.351992+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24154}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e7 new map
Nov 29 02:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e7 print_map
e7
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:29.351992+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24154}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:32 np0005539551 ceph-mon[81672]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539551 ceph-mon[81672]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539551 radosgw[83679]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:33 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-1-wmgqmg[83675]: 2025-11-29T07:16:33.034+0000 7facf13de940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:33 np0005539551 radosgw[83679]: framework: beast
Nov 29 02:16:33 np0005539551 radosgw[83679]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 02:16:33 np0005539551 radosgw[83679]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:33 np0005539551 radosgw[83679]: starting handler: beast
Nov 29 02:16:33 np0005539551 radosgw[83679]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:33 np0005539551 radosgw[83679]: mgrc service_daemon_register rgw.24140 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.wmgqmg,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=dee0d062-b680-4e01-91de-db2466760c83,zone_name=default,zonegroup_id=135cb529-a24d-489d-8cfe-9e045cfed63b,zonegroup_name=default}
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.225920497 +0000 UTC m=+0.047711692 container create 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:33 np0005539551 systemd[1]: Started libpod-conmon-545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836.scope.
Nov 29 02:16:33 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.208768846 +0000 UTC m=+0.030560071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:33 np0005539551 ceph-mon[81672]: Deploying daemon mds.cephfs.compute-1.ldsugj on compute-1
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.318853684 +0000 UTC m=+0.140644899 container init 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.327656464 +0000 UTC m=+0.149447659 container start 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.331559566 +0000 UTC m=+0.153350761 container attach 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:33 np0005539551 youthful_kowalevski[84453]: 167 167
Nov 29 02:16:33 np0005539551 systemd[1]: libpod-545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836.scope: Deactivated successfully.
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.335331796 +0000 UTC m=+0.157123001 container died 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:33 np0005539551 systemd[1]: var-lib-containers-storage-overlay-5be5337c3103bdcddfb9b51e82a704bab23bb77f6d4a4508213cfeeb2d621a59-merged.mount: Deactivated successfully.
Nov 29 02:16:33 np0005539551 podman[84433]: 2025-11-29 07:16:33.380033658 +0000 UTC m=+0.201824853 container remove 545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:16:33 np0005539551 systemd[1]: libpod-conmon-545d267629605763a0df8beb187c457fc609459ec8c1e26894cc555e72c32836.scope: Deactivated successfully.
Nov 29 02:16:33 np0005539551 systemd[1]: Reloading.
Nov 29 02:16:33 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:33 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:33 np0005539551 systemd[1]: Reloading.
Nov 29 02:16:33 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:33 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:35 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 29 02:16:35 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 29 02:16:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 02:16:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:39 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 02:16:39 np0005539551 systemd[1]: Starting Ceph mds.cephfs.compute-1.ldsugj for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:39 np0005539551 podman[84596]: 2025-11-29 07:16:39.373175793 +0000 UTC m=+0.020352574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:39 np0005539551 podman[84596]: 2025-11-29 07:16:39.518627807 +0000 UTC m=+0.165804568 container create 2ef57eae8d0f953f7ab9e468ea092d275f0e4b9f52c7acd2bca257e900bd89b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-1-ldsugj, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:39 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cb6fe94d20088ddd3c8bd7dd86121e8d2c582d80e0d51d18ba7b31fb0115b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:39 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cb6fe94d20088ddd3c8bd7dd86121e8d2c582d80e0d51d18ba7b31fb0115b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:39 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cb6fe94d20088ddd3c8bd7dd86121e8d2c582d80e0d51d18ba7b31fb0115b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:39 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cb6fe94d20088ddd3c8bd7dd86121e8d2c582d80e0d51d18ba7b31fb0115b1/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.ldsugj supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:39 np0005539551 podman[84596]: 2025-11-29 07:16:39.759412311 +0000 UTC m=+0.406589082 container init 2ef57eae8d0f953f7ab9e468ea092d275f0e4b9f52c7acd2bca257e900bd89b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-1-ldsugj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:16:39 np0005539551 podman[84596]: 2025-11-29 07:16:39.76435029 +0000 UTC m=+0.411527051 container start 2ef57eae8d0f953f7ab9e468ea092d275f0e4b9f52c7acd2bca257e900bd89b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-1-ldsugj, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:39 np0005539551 bash[84596]: 2ef57eae8d0f953f7ab9e468ea092d275f0e4b9f52c7acd2bca257e900bd89b0
Nov 29 02:16:39 np0005539551 systemd[1]: Started Ceph mds.cephfs.compute-1.ldsugj for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:39 np0005539551 ceph-mds[84617]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:39 np0005539551 ceph-mds[84617]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:16:39 np0005539551 ceph-mds[84617]: main not setting numa affinity
Nov 29 02:16:39 np0005539551 ceph-mds[84617]: pidfile_write: ignore empty --pid-file
Nov 29 02:16:39 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-1-ldsugj[84613]: starting mds.cephfs.compute-1.ldsugj at 
Nov 29 02:16:39 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Updating MDS map to version 7 from mon.2
Nov 29 02:16:42 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 02:16:42 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 02:16:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e8 new map
Nov 29 02:16:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e8 print_map
e8
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:29.351992+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24154}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:42 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Updating MDS map to version 8 from mon.2
Nov 29 02:16:42 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Monitors have assigned me to become a standby.
Nov 29 02:16:42 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 02:16:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 02:16:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:45 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 02:16:45 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 02:16:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 29 02:16:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e9 new map
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e9 print_map
e9
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:29.351992+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24154}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 02:16:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e10 new map
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e10 print_map
e10
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	10
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:47.848009+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	56
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14382}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-0.qcwnhf{0:14382} state up:replay seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
 
 
Standby daemons:
 
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: Dropping low affinity active daemon mds.cephfs.compute-2.mmoati in favor of higher affinity standby.
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: Replacing daemon mds.cephfs.compute-2.mmoati as rank 0 with standby daemon mds.cephfs.compute-0.qcwnhf
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e11 new map
Nov 29 02:16:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Updating MDS map to version 11 from mon.2
Nov 29 02:16:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e11 print_map
e11
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	11
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:49.781453+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	56
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14382}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-0.qcwnhf{0:14382} state up:reconnect seq 6 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
 
 
Standby daemons:
 
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:51 np0005539551 ceph-mon[81672]: Deploying daemon haproxy.rgw.default.compute-0.uyfjya on compute-0
Nov 29 02:16:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e12 new map
Nov 29 02:16:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e12 print_map
e12
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	12
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:50.823341+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	56
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14382}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-0.qcwnhf{0:14382} state up:rejoin seq 7 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
 
 
Standby daemons:
 
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e13 new map
Nov 29 02:16:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).mds e13 print_map
e13
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	13
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:16:53.325309+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	56
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14382}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-0.qcwnhf{0:14382} state up:active seq 8 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
 
 
Standby daemons:
 
[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:55 np0005539551 ceph-mon[81672]: daemon mds.cephfs.compute-0.qcwnhf is now active in filesystem cephfs as rank 0
Nov 29 02:16:55 np0005539551 ceph-mon[81672]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Nov 29 02:16:55 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:16:57 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 02:16:57 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 02:16:58 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 29 02:16:58 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 29 02:16:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 02:16:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 02:17:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 02:17:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 02:17:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 02:17:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:06 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 02:17:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 02:17:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 02:17:08 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 02:17:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 02:17:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:09 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 29 02:17:09 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 29 02:17:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:11 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 29 02:17:12 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 29 02:17:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:12.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: Deploying daemon haproxy.rgw.default.compute-2.efzvmt on compute-2
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 02:17:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 02:17:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 02:17:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:16 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 29 02:17:16 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 29 02:17:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 29 02:17:18 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 29 02:17:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 29 02:17:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 29 02:17:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 02:17:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 02:17:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:21 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 61 pg[10.0( v 60'49 (0'0,60'49] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=8.848906517s) [1] r=0 lpr=61 pi=[51,61)/1 luod=52'48 crt=60'49 lcod 52'47 mlcod 52'47 active pruub 172.875885010s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:21 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 61 pg[10.0( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=8.848906517s) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 52'47 mlcod 0'0 unknown pruub 172.875885010s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:22 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 02:17:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.17( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.9( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1( v 60'49 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.7( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.e( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1b( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.12( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.11( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.13( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1f( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.10( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1d( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1e( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1c( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.1a( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.19( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.18( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.6( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.5( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.4( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.b( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.8( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.a( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.c( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.d( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.f( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.2( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.3( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.15( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.16( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:23 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 62 pg[10.14( v 60'49 lc 0'0 (0'0,60'49] local-lis/les=51/52 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:24.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:25 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 29 02:17:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:27 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:17:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:29.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:30.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:31 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:17:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:32.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:17:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:37.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:17:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:40.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f0fcb811640' had timed out after 15.000000954s
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f0fcb811640' had timed out after 15.000000954s
Nov 29 02:17:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 1..530) lease_timeout -- calling new election
Nov 29 02:17:40 np0005539551 ceph-mon[81672]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:17:40 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(16) init, last seen epoch 16
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 16.858875275s
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 16.858875275s
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110853195s, txc = 0x5616f407cf00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110710144s, txc = 0x5616f5094000
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110626221s, txc = 0x5616f4dba000
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110534668s, txc = 0x5616f507b800
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110439301s, txc = 0x5616f507bb00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110433578s, txc = 0x5616f4dba300
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110321045s, txc = 0x5616f5094900
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110256195s, txc = 0x5616f4daa000
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110250473s, txc = 0x5616f4dba600
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110210419s, txc = 0x5616f5094c00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110111237s, txc = 0x5616f4daa300
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110101700s, txc = 0x5616f5094f00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110069275s, txc = 0x5616f4dba900
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.110013962s, txc = 0x5616f5095200
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109924316s, txc = 0x5616f4daa600
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109924316s, txc = 0x5616f4dbac00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109901428s, txc = 0x5616f5095500
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109880447s, txc = 0x5616f508a300
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109800339s, txc = 0x5616f5010900
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109771729s, txc = 0x5616f4daa900
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109582901s, txc = 0x5616f2eb3500
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109556198s, txc = 0x5616f4dbaf00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109416962s, txc = 0x5616f2eb3800
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109373093s, txc = 0x5616f4dbb200
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109346390s, txc = 0x5616f508a600
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109321594s, txc = 0x5616f4daac00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109270096s, txc = 0x5616f4dbb500
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109228134s, txc = 0x5616f5095800
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.109148026s, txc = 0x5616f2eb3b00
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f0fcb811640' had timed out after 15.000000954s
Nov 29 02:17:40 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 29 02:17:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:41 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 16.091018677s, txc = 0x5616f4dbbb00
Nov 29 02:17:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:42 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Nov 29 02:17:42 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Nov 29 02:17:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-1 calling monitor election
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:44 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:17:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:45.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1b( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.7( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.9( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.13( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.e( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.12( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.11( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.10( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1f( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.17( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1e( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1d( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.6( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1c( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.1a( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.18( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.5( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.4( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.b( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.8( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.a( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.c( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.f( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.d( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.0( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 52'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.2( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.14( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.19( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.3( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.15( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 63 pg[10.16( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:46 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 29 02:17:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 29 02:17:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:47.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 02:17:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 02:17:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:49.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:50.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:50 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 02:17:50 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 02:17:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:51.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 29 02:17:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 29 02:17:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1b( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.754457474s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.799377441s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826634407s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.871551514s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826523781s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.871551514s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.13( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826518059s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.871551514s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1b( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.754343987s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.799377441s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.13( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826484680s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.871551514s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.12( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826606750s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.871688843s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.12( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826550484s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.871688843s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.10( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826810837s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872070312s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.11( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826639175s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.871902466s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.10( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826784134s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872070312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.11( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826615334s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.871902466s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1e( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826860428s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872283936s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.1e( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826780319s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872283936s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.19( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.827116966s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872726440s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.19( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.827095032s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872726440s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.18( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826667786s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872421265s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.5( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826666832s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872467041s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.18( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826639175s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872421265s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.4( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826641083s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872482300s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.5( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826635361s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872467041s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.4( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826618195s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872482300s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.8( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826487541s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872558594s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.8( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826457977s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872558594s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.f( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826454163s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872634888s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.f( v 60'49 (0'0,60'49] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826429367s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872634888s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.3( v 63'54 (0'0,63'54] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826348305s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 63'53 active pruub 211.872711182s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.3( v 63'54 (0'0,63'54] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826302528s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 0'0 unknown NOTIFY pruub 211.872711182s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.14( v 63'54 (0'0,63'54] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826197624s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 63'53 active pruub 211.872711182s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.14( v 63'54 (0'0,63'54] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826132774s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 0'0 unknown NOTIFY pruub 211.872711182s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.15( v 63'54 (0'0,63'54] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.826033592s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 63'53 active pruub 211.872772217s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.15( v 63'54 (0'0,63'54] local-lis/les=61/63 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.825958252s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=63'54 lcod 63'53 mlcod 0'0 unknown NOTIFY pruub 211.872772217s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.2( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.825832367s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active pruub 211.872680664s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:54 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[10.2( v 60'49 (0'0,60'49] local-lis/les=61/63 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64 pruub=14.825808525s) [0] r=-1 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.872680664s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:55.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.8( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.5( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.7( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.4( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.1b( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.19( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1d( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.14( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.12( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.10( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.17( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 64 pg[8.18( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 02:17:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 02:17:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:58.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:17:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 02:17:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:17:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.5( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.4( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=64/66 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1b( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1d( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1c( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.7( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1e( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.f( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.12( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.1a( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[11.14( empty local-lis/les=64/66 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 66 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=64/66 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: Deploying daemon keepalived.rgw.default.compute-2.gntzbr on compute-2
Nov 29 02:18:01 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:18:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 02:18:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 02:18:05 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 29 02:18:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:06 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 29 02:18:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:06.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:06.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 02:18:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 02:18:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:08 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 02:18:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:12.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 02:18:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 02:18:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:16 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 02:18:16 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 02:18:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 02:18:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:18:18 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 29 02:18:18 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 29 02:18:19 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 29 02:18:19 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 29 02:18:20 np0005539551 podman[84856]: 2025-11-29 07:18:20.002376667 +0000 UTC m=+0.295520640 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:20 np0005539551 podman[84856]: 2025-11-29 07:18:20.11393851 +0000 UTC m=+0.407082333 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:18:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:18:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 02:18:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 02:18:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 02:18:21 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 02:18:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 02:18:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:22.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:23 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 29 02:18:23 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 29 02:18:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:24.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:18:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 02:18:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 02:18:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:30.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 02:18:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:32.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:32 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 29 02:18:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:34.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:35 np0005539551 systemd[72521]: Created slice User Background Tasks Slice.
Nov 29 02:18:35 np0005539551 systemd[72521]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 02:18:35 np0005539551 systemd[72521]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 02:18:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:18:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 1..569) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.136351779s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:18:37 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:18:37.592+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 1..569) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.136351779s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:18:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.812114239s
Nov 29 02:18:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.812114716s
Nov 29 02:18:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.812385082s, txc = 0x5616f2ec6c00
Nov 29 02:18:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 29 02:18:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.005899429s, txc = 0x5616f5642300
Nov 29 02:18:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 02:18:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:18:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 77 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:38 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 77 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:38 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 77 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:38 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 77 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 78 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 02:18:39 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 02:18:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 02:18:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:42 np0005539551 systemd-logind[788]: New session 33 of user zuul.
Nov 29 02:18:42 np0005539551 systemd[1]: Started Session 33 of User zuul.
Nov 29 02:18:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 02:18:42 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 80 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=80) [1] r=0 lpr=80 pi=[59,80)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:42 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 80 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=80) [1] r=0 lpr=80 pi=[59,80)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:42 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:18:42 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:43 np0005539551 python3.9[85265]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:18:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 81 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=80/81 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=80) [1] r=0 lpr=80 pi=[59,80)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:18:44 np0005539551 ceph-mon[81672]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:18:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:44.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 29 02:18:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 29 02:18:44 np0005539551 python3.9[85529]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:18:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 02:18:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:18:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 82 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 82 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 82 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:46.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 02:18:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 02:18:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 02:18:49 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 29 02:18:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:50.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:18:51 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 02:18:51 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 29 02:18:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 02:18:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 29 02:18:52 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 29 02:18:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:54 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 02:18:54 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 02:18:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 02:18:55 np0005539551 ceph-mon[81672]: Reconfiguring mgr.compute-0.pdhsqi (monmap changed)...
Nov 29 02:18:55 np0005539551 ceph-mon[81672]: Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:18:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 02:18:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:56.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:18:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 02:18:57 np0005539551 ceph-mon[81672]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:18:57 np0005539551 ceph-mon[81672]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:18:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:18:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539551 ceph-mon[81672]: Reconfiguring osd.0 (monmap changed)...
Nov 29 02:18:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:18:58 np0005539551 ceph-mon[81672]: Reconfiguring daemon osd.0 on compute-0
Nov 29 02:18:59 np0005539551 podman[85678]: 2025-11-29 07:18:59.516180793 +0000 UTC m=+0.020728404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:59 np0005539551 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 02:18:59 np0005539551 systemd[1]: session-33.scope: Consumed 9.355s CPU time.
Nov 29 02:18:59 np0005539551 systemd-logind[788]: Session 33 logged out. Waiting for processes to exit.
Nov 29 02:18:59 np0005539551 systemd-logind[788]: Removed session 33.
Nov 29 02:19:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:00 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Nov 29 02:19:00 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Nov 29 02:19:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 29 02:19:04 np0005539551 podman[85678]: 2025-11-29 07:19:04.754428525 +0000 UTC m=+5.258976146 container create b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:19:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 02:19:04 np0005539551 systemd[1]: Started libpod-conmon-b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481.scope.
Nov 29 02:19:04 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:19:04 np0005539551 podman[85678]: 2025-11-29 07:19:04.960884791 +0000 UTC m=+5.465432392 container init b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:04 np0005539551 podman[85678]: 2025-11-29 07:19:04.969667307 +0000 UTC m=+5.474214888 container start b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:19:04 np0005539551 podman[85678]: 2025-11-29 07:19:04.974621745 +0000 UTC m=+5.479169326 container attach b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:19:04 np0005539551 naughty_hodgkin[85719]: 167 167
Nov 29 02:19:04 np0005539551 systemd[1]: libpod-b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481.scope: Deactivated successfully.
Nov 29 02:19:04 np0005539551 podman[85678]: 2025-11-29 07:19:04.977767966 +0000 UTC m=+5.482315547 container died b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay-bd14cf6a6cbb941143b7330377d8367235f3a2df95bddaa493199f69a2b8cb1a-merged.mount: Deactivated successfully.
Nov 29 02:19:05 np0005539551 podman[85678]: 2025-11-29 07:19:05.306393518 +0000 UTC m=+5.810941099 container remove b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:05 np0005539551 systemd[1]: libpod-conmon-b141dd9a3ac6a17d36e3c385d4e85ffe2257dfde9e0f04029b7a1d3497730481.scope: Deactivated successfully.
Nov 29 02:19:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:19:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 02:19:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 88 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [1] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 88 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [1] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:19:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 89 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 89 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 89 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 89 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=-1 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 29 02:19:07 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 29 02:19:07 np0005539551 podman[85855]: 2025-11-29 07:19:07.723846436 +0000 UTC m=+0.025406005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:08.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 02:19:11 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 29 02:19:11 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:19:11 np0005539551 podman[85855]: 2025-11-29 07:19:11.781506909 +0000 UTC m=+4.083066468 container create 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:19:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539551 systemd[1]: Started libpod-conmon-8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c.scope.
Nov 29 02:19:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:19:12 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 29 02:19:12 np0005539551 podman[85855]: 2025-11-29 07:19:12.434620786 +0000 UTC m=+4.736180355 container init 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:19:12 np0005539551 podman[85855]: 2025-11-29 07:19:12.441036281 +0000 UTC m=+4.742595800 container start 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:12 np0005539551 sleepy_goldstine[85871]: 167 167
Nov 29 02:19:12 np0005539551 systemd[1]: libpod-8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c.scope: Deactivated successfully.
Nov 29 02:19:12 np0005539551 podman[85855]: 2025-11-29 07:19:12.452347792 +0000 UTC m=+4.753907321 container attach 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:19:12 np0005539551 podman[85855]: 2025-11-29 07:19:12.453218374 +0000 UTC m=+4.754777913 container died 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:12 np0005539551 systemd[1]: var-lib-containers-storage-overlay-1ee562d96d9b906558f09058e7b912d31d6ffbf372d9526526ad51dfa6754753-merged.mount: Deactivated successfully.
Nov 29 02:19:12 np0005539551 podman[85855]: 2025-11-29 07:19:12.506052484 +0000 UTC m=+4.807612013 container remove 8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:12 np0005539551 systemd[1]: libpod-conmon-8f6e960ef8a3873214a3b966acb4d3a982a1f253cb1198ef9a3f4ad7cecb283c.scope: Deactivated successfully.
Nov 29 02:19:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: Reconfiguring osd.1 (monmap changed)...
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: Reconfiguring daemon osd.1 on compute-1
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:19:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:19:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 02:19:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:19:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:19:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.292773431 +0000 UTC m=+0.024589564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.655773719 +0000 UTC m=+0.387589832 container create e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 92 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 92 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 92 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=9 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 92 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=9 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:15 np0005539551 systemd[1]: Started libpod-conmon-e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945.scope.
Nov 29 02:19:15 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.740331946 +0000 UTC m=+0.472148079 container init e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.746922715 +0000 UTC m=+0.478738838 container start e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.750312132 +0000 UTC m=+0.482128265 container attach e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:15 np0005539551 zen_liskov[86028]: 167 167
Nov 29 02:19:15 np0005539551 systemd[1]: libpod-e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945.scope: Deactivated successfully.
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.752407097 +0000 UTC m=+0.484223230 container died e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:19:15 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ed4c4d8d9f4ac42f86b092e0543dbb55c7a4258b5e7063c2d47a74ce62d1c1d4-merged.mount: Deactivated successfully.
Nov 29 02:19:15 np0005539551 podman[86012]: 2025-11-29 07:19:15.842931277 +0000 UTC m=+0.574747390 container remove e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:15 np0005539551 systemd[1]: libpod-conmon-e84a2796155427c167b961f09f33d6ad3873e917a87c8678c37e9f441ccda945.scope: Deactivated successfully.
Nov 29 02:19:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:16 np0005539551 systemd-logind[788]: New session 34 of user zuul.
Nov 29 02:19:16 np0005539551 systemd[1]: Started Session 34 of User zuul.
Nov 29 02:19:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:16.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 02:19:16 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 93 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=9 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:16 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 93 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=4 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:17 np0005539551 python3.9[86200]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:19:18 np0005539551 ceph-mon[81672]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:19:18 np0005539551 ceph-mon[81672]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:19:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:18 np0005539551 python3.9[86374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539551 ceph-mon[81672]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:19:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:19 np0005539551 ceph-mon[81672]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:19:19 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 02:19:19 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 02:19:20 np0005539551 python3.9[86530]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:20.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:21 np0005539551 python3.9[86683]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:19:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539551 python3.9[86937]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 94 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94) [1] r=0 lpr=94 pi=[76,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 94 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94) [1] r=0 lpr=94 pi=[76,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:22 np0005539551 podman[87034]: 2025-11-29 07:19:22.612481018 +0000 UTC m=+0.051582575 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:19:22 np0005539551 podman[87034]: 2025-11-29 07:19:22.71976876 +0000 UTC m=+0.158870337 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:19:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:23 np0005539551 python3.9[87285]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:19:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 02:19:24 np0005539551 python3.9[87435]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:19:24 np0005539551 network[87452]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:19:24 np0005539551 network[87453]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:19:24 np0005539551 network[87454]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:19:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 97 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 97 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 97 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 97 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 02:19:26 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 02:19:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:27 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 02:19:27 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 02:19:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:28.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Nov 29 02:19:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:28.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539551 python3.9[87715]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:29 np0005539551 python3.9[87865]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:29 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Nov 29 02:19:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 02:19:29 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 98 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=6 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:29 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 98 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=5 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97) [1] r=0 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:30.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:30.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:30 np0005539551 python3.9[88019]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:31 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 29 02:19:31 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 29 02:19:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539551 python3.9[88177]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:19:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:33 np0005539551 python3.9[88261]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:19:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:34.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:34.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:34 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 29 02:19:35 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 29 02:19:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 02:19:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:36.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:36.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 29 02:19:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 29 02:19:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:38.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:38 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 02:19:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:40.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:40 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 02:19:40 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:41 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 100 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100) [1] r=0 lpr=100 pi=[71,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:41 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 100 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100) [1] r=0 lpr=100 pi=[71,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:42.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:42.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:19:43 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 101 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101) [1] r=0 lpr=101 pi=[59,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:43 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 101 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[71,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:43 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 101 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[71,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:43 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 101 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[71,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:43 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 101 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[71,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:44.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:19:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 02:19:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:46 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 02:19:47 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 02:19:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:48.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 02:19:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 103 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 103 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 103 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 103 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:49 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 02:19:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:50 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 02:19:50 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 02:19:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 02:19:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 104 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 104 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 104 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=5 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 104 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=6 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103) [1] r=0 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:52.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:19:53 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 02:19:53 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 02:19:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 02:19:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:54.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 105 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=105) [1] r=0 lpr=105 pi=[59,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:55 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 105 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=104/105 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:56.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 29 02:19:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 29 02:19:57 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 02:19:57 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 02:19:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:58.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:19:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:19:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 29 02:19:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 29 02:20:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:01 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 02:20:01 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 02:20:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:02.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 02:20:03 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[59,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:03 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[59,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:03 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 02:20:03 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 02:20:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:20:04 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:20:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:04.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:04.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 29 02:20:04 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 29 02:20:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 02:20:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 02:20:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 108 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=106/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 108 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=106/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:06.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 02:20:07 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 109 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=108/109 n=5 ec=59/49 lis/c=106/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:09 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 29 02:20:09 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 29 02:20:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:10.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:12.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Nov 29 02:20:13 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Nov 29 02:20:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:14.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 02:20:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 110 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=110) [1] r=0 lpr=110 pi=[59,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:16.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:16.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 02:20:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 02:20:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:18.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:18 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 112 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=112) [1]/[0] r=-1 lpr=112 pi=[59,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:18 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 112 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=112) [1]/[0] r=-1 lpr=112 pi=[59,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:18.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 02:20:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 02:20:20 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 02:20:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:20.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 02:20:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 114 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=112/59 les/c/f=113/60/0 sis=114) [1] r=0 lpr=114 pi=[59,114)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 114 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=112/59 les/c/f=113/60/0 sis=114) [1] r=0 lpr=114 pi=[59,114)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:22.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:24 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 02:20:24 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 02:20:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:24.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:26.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 02:20:27 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 115 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=114/115 n=4 ec=59/49 lis/c=112/59 les/c/f=113/60/0 sis=114) [1] r=0 lpr=114 pi=[59,114)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:28.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 02:20:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 02:20:32 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 117 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1] r=0 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:20:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:20:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:20:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 02:20:33 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[76,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:33 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[76,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:33 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 02:20:33 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 02:20:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:20:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:34 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 29 02:20:34 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 29 02:20:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:34 np0005539551 python3.9[88608]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:20:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 02:20:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 02:20:36 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 120 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=118/76 les/c/f=119/77/0 sis=120) [1] r=0 lpr=120 pi=[76,120)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:36 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 120 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=118/76 les/c/f=119/77/0 sis=120) [1] r=0 lpr=120 pi=[76,120)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:36 np0005539551 python3.9[88895]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:20:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 02:20:37 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 02:20:37 np0005539551 python3.9[89047]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:20:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:38 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 29 02:20:38 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 29 02:20:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:39 np0005539551 python3.9[89199]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:40 np0005539551 python3.9[89351]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:20:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:41 np0005539551 python3.9[89503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 02:20:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:42 np0005539551 python3.9[89842]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:42.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:43 np0005539551 python3.9[89920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:20:43 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 02:20:44 np0005539551 python3.9[90072]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:44.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:44 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 121 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=120/121 n=4 ec=59/49 lis/c=118/76 les/c/f=119/77/0 sis=120) [1] r=0 lpr=120 pi=[76,120)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:44 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 02:20:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:44.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:45 np0005539551 podman[89776]: 2025-11-29 07:20:45.086962757 +0000 UTC m=+2.909016199 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:20:45 np0005539551 podman[89776]: 2025-11-29 07:20:45.39726096 +0000 UTC m=+3.219314372 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539551 python3.9[90334]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:20:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:46.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539551 python3.9[90617]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:20:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:48 np0005539551 python3.9[90770]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:20:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 29 02:20:48 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 29 02:20:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:48.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:49 np0005539551 python3.9[90922]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:20:49 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 29 02:20:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:49 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 29 02:20:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 02:20:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 122 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=122 pruub=10.258704185s) [2] r=-1 lpr=122 pi=[80,122)/1 crt=55'1153 mlcod 0'0 active pruub 382.474029541s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:49 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 122 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=122 pruub=10.258603096s) [2] r=-1 lpr=122 pi=[80,122)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 382.474029541s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:20:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:20:50 np0005539551 python3.9[91074]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:50.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 02:20:50 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 123 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=123) [2]/[1] r=0 lpr=123 pi=[80,123)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:50 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 123 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=80/81 n=4 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=123) [2]/[1] r=0 lpr=123 pi=[80,123)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:20:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 02:20:51 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 124 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=123/124 n=4 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=123) [2]/[1] async=[2] r=0 lpr=123 pi=[80,123)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:52.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:20:52 np0005539551 python3.9[91227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 02:20:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 125 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=123/124 n=4 ec=59/49 lis/c=123/80 les/c/f=124/81/0 sis=125 pruub=14.916170120s) [2] async=[2] r=-1 lpr=125 pi=[80,125)/1 crt=55'1153 mlcod 55'1153 active pruub 390.261474609s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:52 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 125 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=123/124 n=4 ec=59/49 lis/c=123/80 les/c/f=124/81/0 sis=125 pruub=14.915349960s) [2] r=-1 lpr=125 pi=[80,125)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 390.261474609s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:53 np0005539551 python3.9[91379]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:53 np0005539551 python3.9[91457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:20:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:54.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:54 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Nov 29 02:20:54 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Nov 29 02:20:54 np0005539551 python3.9[91609]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:55 np0005539551 python3.9[91687]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 02:20:56 np0005539551 python3.9[91839]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 02:20:56 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 02:20:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:57.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:20:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:59.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 29 02:20:59 np0005539551 ceph-osd[78953]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 29 02:20:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:21:00 np0005539551 python3.9[91990]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:00.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:01 np0005539551 python3.9[92192]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:21:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:01.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:01 np0005539551 python3.9[92342]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:02.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539551 python3.9[92494]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:03 np0005539551 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:21:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 02:21:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:03 np0005539551 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:21:03 np0005539551 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:21:03 np0005539551 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:21:04 np0005539551 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:21:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:21:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:21:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:04 np0005539551 python3.9[92657]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:21:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 02:21:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:07.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 128 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=8.157411575s) [0] r=-1 lpr=128 pi=[92,128)/1 crt=55'1153 mlcod 0'0 active pruub 399.155792236s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 128 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=8.157295227s) [0] r=-1 lpr=128 pi=[92,128)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 399.155792236s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:08 np0005539551 python3.9[92810]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 02:21:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:10 np0005539551 python3.9[92964]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:11 np0005539551 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 02:21:11 np0005539551 systemd[1]: session-34.scope: Consumed 1min 2.016s CPU time.
Nov 29 02:21:11 np0005539551 systemd-logind[788]: Session 34 logged out. Waiting for processes to exit.
Nov 29 02:21:11 np0005539551 systemd-logind[788]: Removed session 34.
Nov 29 02:21:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:11.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 02:21:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 130 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=130) [0]/[1] r=0 lpr=130 pi=[92,130)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:15 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 130 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=130) [0]/[1] r=0 lpr=130 pi=[92,130)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539551 systemd-logind[788]: New session 35 of user zuul.
Nov 29 02:21:16 np0005539551 systemd[1]: Started Session 35 of User zuul.
Nov 29 02:21:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:16.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 02:21:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:17 np0005539551 python3.9[93144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:18.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:19 np0005539551 python3.9[93300]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:21:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 02:21:19 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 132 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=130/132 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=130) [0]/[1] async=[0] r=0 lpr=130 pi=[92,130)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:21:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:19.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:19 np0005539551 python3.9[93453]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:21:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:21 np0005539551 python3.9[93537]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:21:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:21.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 02:21:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 133 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=130/132 n=4 ec=59/49 lis/c=130/92 les/c/f=132/93/0 sis=133 pruub=13.231639862s) [0] async=[0] r=-1 lpr=133 pi=[92,133)/1 crt=55'1153 mlcod 55'1153 active pruub 417.784912109s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:22 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 133 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=130/132 n=4 ec=59/49 lis/c=130/92 les/c/f=132/93/0 sis=133 pruub=13.231541634s) [0] r=-1 lpr=133 pi=[92,133)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 417.784912109s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:23.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:24 np0005539551 python3.9[93690]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:25.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:26.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:27.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:27 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 02:21:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:28 np0005539551 python3.9[93843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:21:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:28.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:29.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:29 np0005539551 python3.9[93996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:30.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:30 np0005539551 python3.9[94148]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:21:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:31.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539551 python3.9[94298]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:32.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:33 np0005539551 python3.9[94456]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:36.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:38.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:40.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 02:21:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:42.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 02:21:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 18.054523468s
Nov 29 02:21:45 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 18.054523468s
Nov 29 02:21:45 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.054927826s, txc = 0x5616f19fdb00
Nov 29 02:21:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 1..738) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 13.386160851s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:21:46 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:21:46.482+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 1..738) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 13.386160851s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:21:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:47 np0005539551 python3.9[94610]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:47.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:21:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:21:49 np0005539551 python3.9[94897]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:21:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:50 np0005539551 python3.9[95047]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539551 python3.9[95201]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:53 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Nov 29 02:21:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:54 np0005539551 python3.9[95354]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:55.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:56.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539551 python3.9[95507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:58 np0005539551 ceph-mon[81672]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:58 np0005539551 python3.9[95661]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 02:21:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:21:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:21:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 get_health_metrics reporting 3 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:22:00 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:22:00.049+0000 7fe9991b7640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 3 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:22:00 np0005539551 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 02:22:00 np0005539551 systemd[1]: session-35.scope: Consumed 18.706s CPU time.
Nov 29 02:22:00 np0005539551 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Nov 29 02:22:00 np0005539551 systemd-logind[788]: Removed session 35.
Nov 29 02:22:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:22:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:01.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 02:22:02 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 139 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=139 pruub=15.086045265s) [2] r=-1 lpr=139 pi=[97,139)/1 crt=55'1153 mlcod 0'0 active pruub 460.113922119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:02 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 139 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=139 pruub=15.085873604s) [2] r=-1 lpr=139 pi=[97,139)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 460.113922119s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:22:02 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:22:02 np0005539551 podman[95859]: 2025-11-29 07:22:02.798305709 +0000 UTC m=+1.300910847 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:22:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:02.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:03 np0005539551 podman[95859]: 2025-11-29 07:22:03.197623509 +0000 UTC m=+1.700228647 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:22:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:04.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:05 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 29 02:22:05 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:05.378147) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:22:05 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 29 02:22:05 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925378552, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7955, "num_deletes": 256, "total_data_size": 16417507, "memory_usage": 16737248, "flush_reason": "Manual Compaction"}
Nov 29 02:22:05 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 29 02:22:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:05.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:05 np0005539551 systemd-logind[788]: New session 36 of user zuul.
Nov 29 02:22:05 np0005539551 systemd[1]: Started Session 36 of User zuul.
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400926210414, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 10179992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7960, "table_properties": {"data_size": 10146937, "index_size": 21804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 97693, "raw_average_key_size": 24, "raw_value_size": 10068067, "raw_average_value_size": 2481, "num_data_blocks": 953, "num_entries": 4058, "num_filter_entries": 4058, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 1764400510, "file_creation_time": 1764400925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 864028 microseconds, and 38922 cpu microseconds.
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.210473) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 10179992 bytes OK
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.242252) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.350721) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.350766) EVENT_LOG_v1 {"time_micros": 1764400926350757, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.350786) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16374094, prev total WAL file size 16463566, number of live WAL files 2.
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.353733) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(9941KB) 8(1648B)]
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400926353837, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 10181640, "oldest_snapshot_seqno": -1}
Nov 29 02:22:06 np0005539551 python3.9[96137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3806 keys, 10176443 bytes, temperature: kUnknown
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400926710809, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 10176443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10144078, "index_size": 21759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93491, "raw_average_key_size": 24, "raw_value_size": 10068347, "raw_average_value_size": 2645, "num_data_blocks": 952, "num_entries": 3806, "num_filter_entries": 3806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764400926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.711247) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 10176443 bytes
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.766203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.5 rd, 28.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(9.7, 0.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 4063, records dropped: 257 output_compression: NoCompression
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.766260) EVENT_LOG_v1 {"time_micros": 1764400926766241, "job": 4, "event": "compaction_finished", "compaction_time_micros": 357123, "compaction_time_cpu_micros": 22859, "output_level": 6, "num_output_files": 1, "total_output_size": 10176443, "num_input_records": 4063, "num_output_records": 3806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400926768781, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400926768849, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:22:06.353619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:22:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 141 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=141) [2]/[1] r=0 lpr=141 pi=[97,141)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 141 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=141) [2]/[1] r=0 lpr=141 pi=[97,141)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 141 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=5 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=141 pruub=15.247399330s) [0] r=-1 lpr=141 pi=[81,141)/1 crt=55'1153 mlcod 0'0 active pruub 464.560882568s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:06 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 141 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=5 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=141 pruub=15.247223854s) [0] r=-1 lpr=141 pi=[81,141)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 464.560882568s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:07.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539551 python3.9[96391]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:22:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 02:22:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 142 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=5 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=142) [0]/[1] r=0 lpr=142 pi=[81,142)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 142 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=81/82 n=5 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=142) [0]/[1] r=0 lpr=142 pi=[81,142)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:08 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 142 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=141/142 n=5 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=141) [2]/[1] async=[2] r=0 lpr=141 pi=[97,141)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:09 np0005539551 python3.9[96614]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:09 np0005539551 ceph-mon[81672]: Health check failed: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:22:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:22:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 02:22:09 np0005539551 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 02:22:09 np0005539551 systemd[1]: session-36.scope: Consumed 2.386s CPU time.
Nov 29 02:22:09 np0005539551 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Nov 29 02:22:09 np0005539551 systemd-logind[788]: Removed session 36.
Nov 29 02:22:09 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 143 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=141/142 n=5 ec=59/49 lis/c=141/97 les/c/f=142/98/0 sis=143 pruub=14.745202065s) [2] async=[2] r=-1 lpr=143 pi=[97,143)/1 crt=55'1153 mlcod 55'1153 active pruub 467.061187744s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:09 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 143 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=141/142 n=5 ec=59/49 lis/c=141/97 les/c/f=142/98/0 sis=143 pruub=14.744150162s) [2] r=-1 lpr=143 pi=[97,143)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 467.061187744s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:10 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 143 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=142/143 n=5 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=142) [0]/[1] async=[0] r=0 lpr=142 pi=[81,142)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:11.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 02:22:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 02:22:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 145 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=142/143 n=5 ec=59/49 lis/c=142/81 les/c/f=143/82/0 sis=145 pruub=13.719526291s) [0] async=[0] r=-1 lpr=145 pi=[81,145)/1 crt=55'1153 mlcod 55'1153 active pruub 469.169372559s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:13 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 145 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=142/143 n=5 ec=59/49 lis/c=142/81 les/c/f=143/82/0 sis=145 pruub=13.718889236s) [0] r=-1 lpr=145 pi=[81,145)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 469.169372559s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:13 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops)
Nov 29 02:22:13 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:22:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:13.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 02:22:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:15.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539551 systemd-logind[788]: New session 37 of user zuul.
Nov 29 02:22:15 np0005539551 systemd[1]: Started Session 37 of User zuul.
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 02:22:16 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 147 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=5 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=147 pruub=15.990142822s) [0] r=-1 lpr=147 pi=[103,147)/1 crt=55'1153 mlcod 0'0 active pruub 474.732269287s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:16 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 147 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=5 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=147 pruub=15.989835739s) [0] r=-1 lpr=147 pi=[103,147)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 474.732269287s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:16 np0005539551 python3.9[96794]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:18 np0005539551 python3.9[96998]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 02:22:18 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 148 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=5 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=148) [0]/[1] r=0 lpr=148 pi=[103,148)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:18 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 148 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=103/104 n=5 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=148) [0]/[1] r=0 lpr=148 pi=[103,148)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:19 np0005539551 python3.9[97154]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:19.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 02:22:19 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 149 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=148/149 n=5 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=148) [0]/[1] async=[0] r=0 lpr=148 pi=[103,148)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:19 np0005539551 python3.9[97238]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 02:22:20 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 150 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=148/149 n=5 ec=59/49 lis/c=148/103 les/c/f=149/104/0 sis=150 pruub=15.024248123s) [0] async=[0] r=-1 lpr=150 pi=[103,150)/1 crt=55'1153 mlcod 55'1153 active pruub 478.172607422s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:20 np0005539551 ceph-osd[78953]: osd.1 pg_epoch: 150 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=148/149 n=5 ec=59/49 lis/c=148/103 les/c/f=149/104/0 sis=150 pruub=15.024137497s) [0] r=-1 lpr=150 pi=[103,150)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 478.172607422s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:21.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:22 np0005539551 python3.9[97391]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:22.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 02:22:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:24 np0005539551 python3.9[97586]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:24.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539551 python3.9[97738]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:26 np0005539551 python3.9[97903]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:26.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:27 np0005539551 python3.9[97981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:28 np0005539551 python3.9[98133]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:28.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:29 np0005539551 python3.9[98211]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:30 np0005539551 python3.9[98363]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:30.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:30 np0005539551 python3.9[98515]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:31.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:31 np0005539551 python3.9[98667]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:32 np0005539551 python3.9[98819]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:33 np0005539551 python3.9[98971]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:34.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:36 np0005539551 python3.9[99124]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:37 np0005539551 python3.9[99278]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:38 np0005539551 python3.9[99430]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:39 np0005539551 python3.9[99582]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:40 np0005539551 python3.9[99735]: ansible-service_facts Invoked
Nov 29 02:22:40 np0005539551 network[99752]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:22:40 np0005539551 network[99753]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:22:40 np0005539551 network[99754]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:22:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:40.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:41.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:42.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:43.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:44.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:46.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:47.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539551 python3.9[100206]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:48.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:49.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:51.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:53.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:54 np0005539551 python3.9[100359]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:22:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:54.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:55.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:56 np0005539551 python3.9[100511]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:56.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539551 python3.9[100589]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:57.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:58 np0005539551 python3.9[100741]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:58 np0005539551 python3.9[100819]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:22:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:59.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539551 python3.9[100971]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:01.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:02.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:03 np0005539551 python3.9[101125]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:23:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:03.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:04 np0005539551 python3.9[101209]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:05 np0005539551 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Nov 29 02:23:05 np0005539551 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 02:23:05 np0005539551 systemd[1]: session-37.scope: Consumed 23.317s CPU time.
Nov 29 02:23:05 np0005539551 systemd-logind[788]: Removed session 37.
Nov 29 02:23:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:05.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:06.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:08.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539551 systemd-logind[788]: New session 38 of user zuul.
Nov 29 02:23:10 np0005539551 systemd[1]: Started Session 38 of User zuul.
Nov 29 02:23:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:10.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:11 np0005539551 python3.9[101391]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:11.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:12 np0005539551 python3.9[101543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:12 np0005539551 python3.9[101621]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:12.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:13 np0005539551 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Nov 29 02:23:13 np0005539551 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 02:23:13 np0005539551 systemd[1]: session-38.scope: Consumed 1.562s CPU time.
Nov 29 02:23:13 np0005539551 systemd-logind[788]: Removed session 38.
Nov 29 02:23:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:13.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:14.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:15.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:16.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:19 np0005539551 systemd-logind[788]: New session 39 of user zuul.
Nov 29 02:23:19 np0005539551 systemd[1]: Started Session 39 of User zuul.
Nov 29 02:23:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:21 np0005539551 python3.9[101932]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:23:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:22 np0005539551 python3.9[102088]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:23 np0005539551 python3.9[102263]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:23.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539551 python3.9[102341]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.0ek3hc4b recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:24 np0005539551 python3.9[102493]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:25 np0005539551 python3.9[102571]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.h4ig7e1g recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:25.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:26 np0005539551 python3.9[102723]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:27.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539551 python3.9[102875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:27 np0005539551 python3.9[102953]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:27.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:28 np0005539551 python3.9[103105]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:28 np0005539551 python3.9[103183]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:29 np0005539551 python3.9[103335]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:29.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:29 np0005539551 python3.9[103487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:30 np0005539551 python3.9[103565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:31.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:31 np0005539551 python3.9[103717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:31 np0005539551 python3.9[103795]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:31.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:33.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:33 np0005539551 python3.9[103947]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:33 np0005539551 systemd[1]: Reloading.
Nov 29 02:23:33 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:33 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:34 np0005539551 python3.9[104137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:34 np0005539551 python3.9[104215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:35.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:35 np0005539551 python3.9[104367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:36 np0005539551 python3.9[104445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:36 np0005539551 python3.9[104597]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:36 np0005539551 systemd[1]: Reloading.
Nov 29 02:23:36 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:36 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:37.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:37 np0005539551 systemd[1]: Starting Create netns directory...
Nov 29 02:23:37 np0005539551 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:23:37 np0005539551 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:23:37 np0005539551 systemd[1]: Finished Create netns directory.
Nov 29 02:23:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:37.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:38 np0005539551 python3.9[104789]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:23:38 np0005539551 network[104806]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:23:38 np0005539551 network[104807]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:23:38 np0005539551 network[104808]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:23:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:39.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:43.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:44 np0005539551 python3.9[105070]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:44 np0005539551 python3.9[105148]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:46 np0005539551 python3.9[105300]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:46 np0005539551 python3.9[105452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:47 np0005539551 python3.9[105530]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:47.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:48 np0005539551 python3.9[105682]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:23:48 np0005539551 systemd[1]: Starting Time & Date Service...
Nov 29 02:23:48 np0005539551 systemd[1]: Started Time & Date Service.
Nov 29 02:23:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:49 np0005539551 python3.9[105838]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:50 np0005539551 python3.9[105990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:51.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:51 np0005539551 python3.9[106068]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539551 python3.9[106270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:52 np0005539551 python3.9[106348]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9aw_v703 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:53.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:53 np0005539551 python3.9[106500]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:53 np0005539551 python3.9[106578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539551 python3.9[106730]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
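The `nft -j list ruleset` command recorded above dumps the current ruleset as a JSON document with one object per table, chain, and rule. A minimal sketch of walking that structure follows; the sample document below (table `filter`, chain `EDPM_INPUT`) is illustrative only, not output captured from this host:

```python
import json

# Illustrative stand-in for `nft -j list ruleset` output; the table and
# chain names here are made up, not read from this node.
sample = json.dumps({
    "nftables": [
        {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
        {"table": {"family": "inet", "name": "filter", "handle": 1}},
        {"chain": {"family": "inet", "table": "filter",
                   "name": "EDPM_INPUT", "handle": 2}},
    ]
})

def list_chains(ruleset_json: str) -> list:
    """Return 'family/table/chain' identifiers from nft JSON output."""
    doc = json.loads(ruleset_json)
    chains = []
    for entry in doc.get("nftables", []):
        chain = entry.get("chain")
        if chain:
            chains.append(f"{chain['family']}/{chain['table']}/{chain['name']}")
    return chains

print(list_chains(sample))  # ['inet/filter/EDPM_INPUT']
```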
Nov 29 02:23:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:55.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539551 python3[106883]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:23:56 np0005539551 python3.9[107035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:57.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539551 python3.9[107113]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:58 np0005539551 python3.9[107265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:58 np0005539551 python3.9[107343]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:23:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:59.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:59 np0005539551 python3.9[107495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:00.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:00 np0005539551 python3.9[107573]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:01 np0005539551 python3.9[107725]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:01 np0005539551 python3.9[107803]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:02 np0005539551 python3.9[107955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:03 np0005539551 python3.9[108033]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:04.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:04 np0005539551 python3.9[108185]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
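The check at 02:24:04 concatenates the five nftables fragments in a fixed order and dry-runs them with `nft -c -f -` (check only, nothing is loaded). A small sketch that rebuilds that pipeline string; the fragment order is copied from the log line, and the `shlex` quoting is a defensive addition of this sketch, not something the recorded command does:

```python
import shlex

# Fragment order matters: chains must be defined before the flushes,
# rules, and jumps that reference them (order taken from the log line).
FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def check_command(paths: list) -> str:
    """Build the dry-run pipeline: cat <fragments> | nft -c -f -"""
    return "cat " + " ".join(shlex.quote(p) for p in paths) + " | nft -c -f -"

print(check_command(FRAGMENTS))
```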
Nov 29 02:24:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:05.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:05 np0005539551 python3.9[108340]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
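The `blockinfile` task above maintains a marker-delimited block of `include` lines inside `/etc/sysconfig/nftables.conf`, so repeated runs update the block in place instead of appending duplicates. A minimal, hypothetical re-implementation of that marker logic (Ansible's real module additionally supports `insertafter`, validation, backups, etc.):

```python
BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
END = "# END ANSIBLE MANAGED BLOCK"

def upsert_block(text: str, block: str) -> str:
    """Insert or replace the marker-delimited managed block, idempotently."""
    lines = text.splitlines()
    managed = [BEGIN, *block.splitlines(), END]
    if BEGIN in lines and END in lines:
        # Replace everything between (and including) the existing markers.
        start, end = lines.index(BEGIN), lines.index(END)
        lines[start:end + 1] = managed
    else:
        lines += managed
    return "\n".join(lines) + "\n"

conf = 'include "/etc/nftables/main.nft"\n'
block = ('include "/etc/nftables/iptables.nft"\n'
         'include "/etc/nftables/edpm-chains.nft"')
once = upsert_block(conf, block)
twice = upsert_block(once, block)
assert once == twice  # second run changes nothing
```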
Nov 29 02:24:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:06 np0005539551 python3.9[108492]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:06 np0005539551 python3.9[108644]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:07.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:07 np0005539551 python3.9[108796]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:24:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:08 np0005539551 python3.9[108948]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
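The two `ansible.posix.mount` tasks above mount hugetlbfs at `/dev/hugepages1G` and `/dev/hugepages2M` and persist them (`state=mounted`, `boot=True`). A sketch of the resulting fstab entries, assembled from the parameters in the log; the standard six-field fstab layout is assumed here, and the exact whitespace Ansible writes may differ:

```python
def fstab_line(path: str, pagesize: str) -> str:
    """fstab entry for a hugetlbfs mount: src  mountpoint  fstype  opts  dump  passno."""
    return f"none {path} hugetlbfs pagesize={pagesize} 0 0"

print(fstab_line("/dev/hugepages1G", "1G"))
print(fstab_line("/dev/hugepages2M", "2M"))
```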
Nov 29 02:24:08 np0005539551 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 02:24:08 np0005539551 systemd[1]: session-39.scope: Consumed 28.949s CPU time.
Nov 29 02:24:08 np0005539551 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Nov 29 02:24:08 np0005539551 systemd-logind[788]: Removed session 39.
Nov 29 02:24:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:09.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:13.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:14 np0005539551 systemd-logind[788]: New session 40 of user zuul.
Nov 29 02:24:14 np0005539551 systemd[1]: Started Session 40 of User zuul.
Nov 29 02:24:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:15.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:15 np0005539551 python3.9[109129]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:24:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:16.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:16 np0005539551 python3.9[109281]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:17.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:17 np0005539551 python3.9[109435]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 02:24:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:18.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:18 np0005539551 python3.9[109587]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.fucef6iy follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:18 np0005539551 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:24:19 np0005539551 python3.9[109714]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.fucef6iy mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401057.6950278-108-126373192216372/.source.fucef6iy _original_basename=.mjbz_3_m follow=False checksum=a1a59eb28f721bfa8fb748cb88539cd5cab8b099 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:19.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:20 np0005539551 python3.9[109866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:21 np0005539551 python3.9[110018]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012 create=True mode=0644 path=/tmp/ansible.fucef6iy state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
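Each line written into the managed block above is a standard `ssh_known_hosts` entry: a comma-separated list of host patterns, a key type, and the base64 key. A small sketch of splitting one such entry; the hostname and key below are shortened placeholders, not the real keys from this deployment:

```python
def parse_known_hosts_line(line: str):
    """Split a known_hosts entry into (host patterns, key type, base64 key)."""
    hosts, keytype, key = line.split(None, 2)
    return hosts.split(","), keytype, key

# Placeholder entry in the same shape as the logged ones (key truncated).
entry = "compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAexample"
hosts, keytype, key = parse_known_hosts_line(entry)
print(hosts[1], keytype)  # 192.168.122.101 ssh-ed25519
```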
Nov 29 02:24:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:22.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539551 python3.9[110170]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fucef6iy' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:23 np0005539551 python3.9[110324]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fucef6iy state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:23 np0005539551 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Nov 29 02:24:23 np0005539551 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 02:24:23 np0005539551 systemd[1]: session-40.scope: Consumed 5.224s CPU time.
Nov 29 02:24:23 np0005539551 systemd-logind[788]: Removed session 40.
Nov 29 02:24:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:24.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:25.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:26.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:28.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:30 np0005539551 systemd-logind[788]: New session 41 of user zuul.
Nov 29 02:24:30 np0005539551 systemd[1]: Started Session 41 of User zuul.
Nov 29 02:24:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:31.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:31 np0005539551 python3.9[110502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.211413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072211463, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1560, "num_deletes": 250, "total_data_size": 3812152, "memory_usage": 3851536, "flush_reason": "Manual Compaction"}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072253524, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1599504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7965, "largest_seqno": 9520, "table_properties": {"data_size": 1594022, "index_size": 2750, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13153, "raw_average_key_size": 20, "raw_value_size": 1582367, "raw_average_value_size": 2441, "num_data_blocks": 125, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400926, "oldest_key_time": 1764400926, "file_creation_time": 1764401072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 42185 microseconds, and 4923 cpu microseconds.
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.253592) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1599504 bytes OK
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.253617) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.260041) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.260096) EVENT_LOG_v1 {"time_micros": 1764401072260084, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.260125) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3804870, prev total WAL file size 3820311, number of live WAL files 2.
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.262094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1562KB)], [15(9937KB)]
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072262230, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11775947, "oldest_snapshot_seqno": -1}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3991 keys, 9766066 bytes, temperature: kUnknown
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072433985, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9766066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9734112, "index_size": 20900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97710, "raw_average_key_size": 24, "raw_value_size": 9656670, "raw_average_value_size": 2419, "num_data_blocks": 917, "num_entries": 3991, "num_filter_entries": 3991, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.434198) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9766066 bytes
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.448662) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.5 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(13.5) write-amplify(6.1) OK, records in: 4454, records dropped: 463 output_compression: NoCompression
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.448703) EVENT_LOG_v1 {"time_micros": 1764401072448689, "job": 6, "event": "compaction_finished", "compaction_time_micros": 171816, "compaction_time_cpu_micros": 22139, "output_level": 6, "num_output_files": 1, "total_output_size": 9766066, "num_input_records": 4454, "num_output_records": 3991, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072449145, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072450700, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.261939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.450822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.450833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.450834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.450836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:24:32.450837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539551 python3.9[110658]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:24:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:34 np0005539551 python3.9[110812]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:24:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:35 np0005539551 python3.9[110965]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:36 np0005539551 python3.9[111118]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:37 np0005539551 python3.9[111270]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:24:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.9 total, 600.0 interval#012Cumulative writes: 5974 writes, 25K keys, 5974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5974 writes, 917 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5974 writes, 25K keys, 5974 commit groups, 1.0 writes per commit group, ingest: 19.52 MB, 0.03 MB/s#012Interval WAL: 5974 writes, 917 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 29 02:24:37 np0005539551 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 02:24:37 np0005539551 systemd[1]: session-41.scope: Consumed 3.604s CPU time.
Nov 29 02:24:37 np0005539551 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Nov 29 02:24:37 np0005539551 systemd-logind[788]: Removed session 41.
Nov 29 02:24:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:42 np0005539551 systemd-logind[788]: New session 42 of user zuul.
Nov 29 02:24:42 np0005539551 systemd[1]: Started Session 42 of User zuul.
Nov 29 02:24:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:44 np0005539551 python3.9[111448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:45.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:45 np0005539551 python3.9[111604]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:24:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:46 np0005539551 python3.9[111688]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:24:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:48 np0005539551 python3.9[111839]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:50.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:50 np0005539551 python3.9[111990]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:24:51 np0005539551 python3.9[112140]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539551 python3.9[112388]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:52.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:52 np0005539551 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 02:24:52 np0005539551 systemd[1]: session-42.scope: Consumed 6.088s CPU time.
Nov 29 02:24:52 np0005539551 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Nov 29 02:24:52 np0005539551 systemd-logind[788]: Removed session 42.
Nov 29 02:24:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:54.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:55.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:24:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:57.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:58.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539551 systemd-logind[788]: New session 43 of user zuul.
Nov 29 02:24:58 np0005539551 systemd[1]: Started Session 43 of User zuul.
Nov 29 02:24:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:24:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:59.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:59 np0005539551 python3.9[112719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:01 np0005539551 python3.9[112875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:02 np0005539551 python3.9[113027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:03 np0005539551 python3.9[113179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:04 np0005539551 python3.9[113302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401102.7248487-158-96341934405951/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=8602bf730a6c85504922a34b3077970c13cb0fcb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:04 np0005539551 python3.9[113454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:05 np0005539551 python3.9[113577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401104.4082854-158-255291923373487/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=96f02fb504411b0b161adf414c18934dfa96b5b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:06.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:06 np0005539551 python3.9[113729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:06 np0005539551 python3.9[113852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401105.6943696-158-132970405324885/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=b34330c9a5a0ec9012c8c228b0b795f087c36c1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:07 np0005539551 python3.9[114004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:08.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:08 np0005539551 python3.9[114156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:09 np0005539551 python3.9[114308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:09 np0005539551 python3.9[114481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401108.5250173-337-250075927801106/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=c8ffee082ef0e57ab6e829de5fe9acd18699e191 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:10.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:25:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 1467 writes, 9916 keys, 1467 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 1467 writes, 1467 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1467 writes, 9916 keys, 1467 commit groups, 1.0 writes per commit group, ingest: 20.34 MB, 0.03 MB/s
Interval WAL: 1467 writes, 1467 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.91              0.04         3    0.303       0      0       0.0       0.0
  L6      1/0    9.31 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.7     39.6     36.0      0.53              0.04         2    0.264    8517    720       0.0       0.0
 Sum      1/0    9.31 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     14.6     21.1      1.44              0.09         5    0.287    8517    720       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     14.6     21.1      1.44              0.09         4    0.359    8517    720       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     39.6     36.0      0.53              0.04         2    0.264    8517    720       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.91              0.04         2    0.453       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.011, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.4 seconds
Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 758.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000107 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(34,651.56 KB,0.209306%) FilterBlock(5,31.98 KB,0.0102746%) IndexBlock(5,75.17 KB,0.024148%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:25:10 np0005539551 python3.9[114633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:11 np0005539551 python3.9[114756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401109.8969073-337-88242050294685/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:11.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539551 python3.9[114908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:12.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:12 np0005539551 python3.9[115031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401111.2027164-337-239044831604444/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=4f77bff4930cd548ed37b601396e6e1df7a75bac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:13.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:13 np0005539551 python3.9[115183]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:14.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539551 python3.9[115335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:14 np0005539551 python3.9[115487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:15 np0005539551 python3.9[115610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401114.486316-520-147603777839001/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=f024c7600f23a6203f69ead8217926496a8aab52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:16 np0005539551 python3.9[115762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:16.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:16 np0005539551 python3.9[115885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401115.6260383-520-7846339478507/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:17.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:17 np0005539551 python3.9[116037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:18 np0005539551 python3.9[116160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401116.9184916-520-196017000234014/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=eaed8084c2188291a6f1e2bab1a3681a7e1ad199 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:19 np0005539551 python3.9[116312]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:20 np0005539551 python3.9[116464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:20.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539551 python3.9[116587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401119.5660408-720-4727327631293/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:21 np0005539551 python3.9[116739]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:21 np0005539551 python3.9[116891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:22 np0005539551 python3.9[117014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401121.488045-786-192555628443926/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:23.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:23 np0005539551 python3.9[117166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:24 np0005539551 python3.9[117318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:24.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539551 python3.9[117441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401123.5297172-853-256402688093763/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:25 np0005539551 python3.9[117593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:26 np0005539551 python3.9[117745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:26.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:26 np0005539551 python3.9[117868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401125.603704-926-234580824120945/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:27.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:27 np0005539551 python3.9[118020]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:28 np0005539551 python3.9[118172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:28.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:28 np0005539551 python3.9[118295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401127.6011145-996-246166362383712/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:29.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:29 np0005539551 python3.9[118447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:30.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:30 np0005539551 python3.9[118599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:30 np0005539551 python3.9[118722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401129.81925-1071-197677661675713/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:31.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:32 np0005539551 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Nov 29 02:25:32 np0005539551 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 02:25:32 np0005539551 systemd[1]: session-43.scope: Consumed 23.629s CPU time.
Nov 29 02:25:32 np0005539551 systemd-logind[788]: Removed session 43.
Nov 29 02:25:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:33.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:38.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:38 np0005539551 systemd-logind[788]: New session 44 of user zuul.
Nov 29 02:25:38 np0005539551 systemd[1]: Started Session 44 of User zuul.
Nov 29 02:25:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:39.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:39 np0005539551 python3.9[118902]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:40 np0005539551 python3.9[119054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:41 np0005539551 python3.9[119177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401139.7410634-68-199922714984507/.source.conf _original_basename=ceph.conf follow=False checksum=dcade63291eb6ea0d49dedd3c47047e031c2100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:41 np0005539551 python3.9[119329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:42 np0005539551 python3.9[119452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401141.3470151-68-62290594846060/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ced193c31d6b83611be924c31eabde34732ad5bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:42 np0005539551 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 02:25:42 np0005539551 systemd[1]: session-44.scope: Consumed 2.963s CPU time.
Nov 29 02:25:42 np0005539551 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Nov 29 02:25:42 np0005539551 systemd-logind[788]: Removed session 44.
Nov 29 02:25:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:47.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:48 np0005539551 systemd-logind[788]: New session 45 of user zuul.
Nov 29 02:25:48 np0005539551 systemd[1]: Started Session 45 of User zuul.
Nov 29 02:25:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:49 np0005539551 python3.9[119631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:50.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:50 np0005539551 python3.9[119787]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:51 np0005539551 python3.9[119939]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:52.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:52 np0005539551 python3.9[120089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:53.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:53 np0005539551 python3.9[120241]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:25:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:55.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:25:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:56.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:58 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 02:25:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:58 np0005539551 python3.9[120398]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:25:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:25:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:59 np0005539551 python3.9[120482]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:26:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:00.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:02 np0005539551 python3.9[120635]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:26:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:02.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:03 np0005539551 python3[120790]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 02:26:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:03.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539551 python3.9[120942]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:04 np0005539551 python3.9[121094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:05 np0005539551 python3.9[121172]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:05.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539551 python3.9[121324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:06.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:06 np0005539551 python3.9[121402]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ij4fproc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:07 np0005539551 python3.9[121554]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:07 np0005539551 python3.9[121632]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:08.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539551 python3.9[121784]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:09 np0005539551 python3[121937]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:26:10 np0005539551 python3.9[122218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:10 np0005539551 python3.9[122343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401169.583006-437-173477079268716/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:11 np0005539551 python3.9[122495]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:12 np0005539551 python3.9[122620]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401171.0590434-482-126927830456413/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:14 np0005539551 python3.9[122772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539551 python3.9[122897]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401173.716696-527-258979887929455/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:15.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:15 np0005539551 python3.9[123049]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:16 np0005539551 python3.9[123174]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401174.968583-572-167729119763620/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:16.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:17 np0005539551 python3.9[123326]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:17 np0005539551 python3.9[123451]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401176.3885462-617-49103211040012/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:19 np0005539551 python3.9[123603]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:19 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:26:19 np0005539551 python3.9[123755]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:20 np0005539551 python3.9[123910]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:21.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:21 np0005539551 python3.9[124062]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.033016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182033105, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1130, "num_deletes": 252, "total_data_size": 2570889, "memory_usage": 2611312, "flush_reason": "Manual Compaction"}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182188973, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1686982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9525, "largest_seqno": 10650, "table_properties": {"data_size": 1681932, "index_size": 2574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10714, "raw_average_key_size": 19, "raw_value_size": 1671684, "raw_average_value_size": 3028, "num_data_blocks": 117, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401072, "oldest_key_time": 1764401072, "file_creation_time": 1764401182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 156029 microseconds, and 5573 cpu microseconds.
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.189055) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1686982 bytes OK
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.189072) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.241387) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.241440) EVENT_LOG_v1 {"time_micros": 1764401182241429, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.241469) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2565415, prev total WAL file size 2565415, number of live WAL files 2.
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.242688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1647KB)], [18(9537KB)]
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182242766, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 11453048, "oldest_snapshot_seqno": -1}
Nov 29 02:26:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4021 keys, 9585221 bytes, temperature: kUnknown
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182791424, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 9585221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9553866, "index_size": 20173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99087, "raw_average_key_size": 24, "raw_value_size": 9476662, "raw_average_value_size": 2356, "num_data_blocks": 876, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.792379) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 9585221 bytes
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.801086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.9 rd, 17.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 4543, records dropped: 522 output_compression: NoCompression
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.801134) EVENT_LOG_v1 {"time_micros": 1764401182801102, "job": 8, "event": "compaction_finished", "compaction_time_micros": 548708, "compaction_time_cpu_micros": 30669, "output_level": 6, "num_output_files": 1, "total_output_size": 9585221, "num_input_records": 4543, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182801691, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182804111, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.242595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.804326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.804340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.804342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.804344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:26:22.804346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539551 python3.9[124215]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:23.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:23 np0005539551 python3.9[124369]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:24 np0005539551 python3.9[124524]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539551 python3.9[124674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:26:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:26:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:27 np0005539551 python3.9[124827]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:27 np0005539551 ovs-vsctl[124828]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 02:26:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:28 np0005539551 python3.9[124980]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:28.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:28 np0005539551 python3.9[125135]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:28 np0005539551 ovs-vsctl[125136]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 02:26:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:30.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539551 python3.9[125286]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:31.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:31 np0005539551 python3.9[125440]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:32.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:32 np0005539551 python3.9[125592]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:33.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:33 np0005539551 python3.9[125670]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:34.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:34 np0005539551 python3.9[125822]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:34 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:26:35 np0005539551 python3.9[125900]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:35 np0005539551 python3.9[126052]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:35 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:26:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:36.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:36 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:26:36 np0005539551 python3.9[126254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:37 np0005539551 python3.9[126332]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:37 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:38.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:39 np0005539551 python3.9[126484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:39 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:26:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:39 np0005539551 python3.9[126562]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:39 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 02:26:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:40 np0005539551 python3.9[126714]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:40 np0005539551 systemd[1]: Reloading.
Nov 29 02:26:40 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:40 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:41 np0005539551 python3.9[126902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:42 np0005539551 python3.9[126980]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:42.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:43 np0005539551 python3.9[127132]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:43 np0005539551 python3.9[127210]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:44 np0005539551 python3.9[127362]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:44 np0005539551 systemd[1]: Reloading.
Nov 29 02:26:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:44 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:44 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:44 np0005539551 systemd[1]: Starting Create netns directory...
Nov 29 02:26:44 np0005539551 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:26:44 np0005539551 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:26:44 np0005539551 systemd[1]: Finished Create netns directory.
Nov 29 02:26:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:45 np0005539551 python3.9[127555]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:46 np0005539551 python3.9[127707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:47 np0005539551 python3.9[127830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401205.822571-1370-143146764202895/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:26:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:26:48 np0005539551 python3.9[127982]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:49 np0005539551 python3.9[128134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:49 np0005539551 python3.9[128257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401208.7430403-1445-177045566193616/.source.json _original_basename=.sih_9vkb follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:50.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:50 np0005539551 python3.9[128409]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:52.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:53 np0005539551 python3.9[128836]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 02:26:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:54.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:55 np0005539551 python3.9[128988]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:26:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:55.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:56 np0005539551 python3.9[129140]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:26:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:56.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:57 np0005539551 python3[129319]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:26:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:58.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:26:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:59.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:00.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:02.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:04.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:06.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:10 np0005539551 podman[129333]: 2025-11-29 07:27:10.0536118 +0000 UTC m=+12.019352630 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:10 np0005539551 podman[129450]: 2025-11-29 07:27:10.178214302 +0000 UTC m=+0.023945458 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:10 np0005539551 podman[129450]: 2025-11-29 07:27:10.281228887 +0000 UTC m=+0.126960023 container create 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:27:10 np0005539551 python3[129319]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:10.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:11 np0005539551 python3.9[129640]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:12 np0005539551 python3.9[129794]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:13 np0005539551 python3.9[129870]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:13 np0005539551 python3.9[130021]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401233.1065052-1709-66609359360156/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:14 np0005539551 python3.9[130097]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:27:14 np0005539551 systemd[1]: Reloading.
Nov 29 02:27:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:14.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:14 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:14 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:15 np0005539551 python3.9[130208]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:27:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:27:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:15.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:15 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:15 np0005539551 systemd[1]: Starting ovn_controller container...
Nov 29 02:27:15 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:27:15 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51015e79cd550f6fc63a860f86aa9f3c781e0b2bc28a751b4d6a4f82d18415/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539551 systemd[1]: Started /usr/bin/podman healthcheck run 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558.
Nov 29 02:27:15 np0005539551 podman[130251]: 2025-11-29 07:27:15.877904691 +0000 UTC m=+0.296447682 container init 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:27:15 np0005539551 ovn_controller[130266]: + sudo -E kolla_set_configs
Nov 29 02:27:15 np0005539551 podman[130251]: 2025-11-29 07:27:15.905616185 +0000 UTC m=+0.324159166 container start 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:27:15 np0005539551 systemd[1]: Created slice User Slice of UID 0.
Nov 29 02:27:15 np0005539551 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 02:27:15 np0005539551 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 02:27:15 np0005539551 edpm-start-podman-container[130251]: ovn_controller
Nov 29 02:27:15 np0005539551 systemd[1]: Starting User Manager for UID 0...
Nov 29 02:27:16 np0005539551 podman[130273]: 2025-11-29 07:27:16.014332347 +0000 UTC m=+0.093402538 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:27:16 np0005539551 systemd[1]: 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558-2228d42c7678f458.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:27:16 np0005539551 systemd[1]: 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558-2228d42c7678f458.service: Failed with result 'exit-code'.
Nov 29 02:27:16 np0005539551 edpm-start-podman-container[130250]: Creating additional drop-in dependency for "ovn_controller" (5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558)
Nov 29 02:27:16 np0005539551 systemd[1]: Reloading.
Nov 29 02:27:16 np0005539551 systemd[130286]: Queued start job for default target Main User Target.
Nov 29 02:27:16 np0005539551 systemd[130286]: Created slice User Application Slice.
Nov 29 02:27:16 np0005539551 systemd[130286]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 02:27:16 np0005539551 systemd[130286]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:16 np0005539551 systemd[130286]: Reached target Paths.
Nov 29 02:27:16 np0005539551 systemd[130286]: Reached target Timers.
Nov 29 02:27:16 np0005539551 systemd[130286]: Starting D-Bus User Message Bus Socket...
Nov 29 02:27:16 np0005539551 systemd[130286]: Starting Create User's Volatile Files and Directories...
Nov 29 02:27:16 np0005539551 systemd[130286]: Finished Create User's Volatile Files and Directories.
Nov 29 02:27:16 np0005539551 systemd[130286]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:27:16 np0005539551 systemd[130286]: Reached target Sockets.
Nov 29 02:27:16 np0005539551 systemd[130286]: Reached target Basic System.
Nov 29 02:27:16 np0005539551 systemd[130286]: Reached target Main User Target.
Nov 29 02:27:16 np0005539551 systemd[130286]: Startup finished in 140ms.
Nov 29 02:27:16 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:16 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:16 np0005539551 systemd[1]: Started User Manager for UID 0.
Nov 29 02:27:16 np0005539551 systemd[1]: Started ovn_controller container.
Nov 29 02:27:16 np0005539551 systemd[1]: Started Session c1 of User root.
Nov 29 02:27:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: INFO:__main__:Validating config file
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: INFO:__main__:Writing out command to execute
Nov 29 02:27:16 np0005539551 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: ++ cat /run_command
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + ARGS=
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + sudo kolla_copy_cacerts
Nov 29 02:27:16 np0005539551 systemd[1]: Started Session c2 of User root.
Nov 29 02:27:16 np0005539551 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + [[ ! -n '' ]]
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + . kolla_extend_start
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + umask 0022
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5122] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5135] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5151] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5160] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5165] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:27:16 np0005539551 kernel: br-int: entered promiscuous mode
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00017|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5344] manager: (ovn-755ad2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 02:27:16 np0005539551 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5553] device (genev_sys_6081): carrier: link connected
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.5556] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 02:27:16 np0005539551 systemd-udevd[130405]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:27:16 np0005539551 systemd-udevd[130401]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.8658] manager: (ovn-479f96-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 02:27:16 np0005539551 NetworkManager[48922]: <info>  [1764401236.9761] manager: (ovn-a63f2f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 02:27:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:17.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:17 np0005539551 python3.9[130535]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:17 np0005539551 ovs-vsctl[130536]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 02:27:18 np0005539551 python3.9[130688]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:18 np0005539551 ovs-vsctl[130690]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 02:27:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:18.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:19 np0005539551 python3.9[130843]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:19 np0005539551 ovs-vsctl[130844]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 02:27:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:19.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539551 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 02:27:19 np0005539551 systemd[1]: session-45.scope: Consumed 59.852s CPU time.
Nov 29 02:27:19 np0005539551 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Nov 29 02:27:19 np0005539551 systemd-logind[788]: Removed session 45.
Nov 29 02:27:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:20.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:22.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:24.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:25.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:25 np0005539551 systemd-logind[788]: New session 47 of user zuul.
Nov 29 02:27:25 np0005539551 systemd[1]: Started Session 47 of User zuul.
Nov 29 02:27:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:26.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539551 systemd[1]: Stopping User Manager for UID 0...
Nov 29 02:27:26 np0005539551 systemd[130286]: Activating special unit Exit the Session...
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped target Main User Target.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped target Basic System.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped target Paths.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped target Sockets.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped target Timers.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:26 np0005539551 systemd[130286]: Closed D-Bus User Message Bus Socket.
Nov 29 02:27:26 np0005539551 systemd[130286]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:27:26 np0005539551 systemd[130286]: Removed slice User Application Slice.
Nov 29 02:27:26 np0005539551 systemd[130286]: Reached target Shutdown.
Nov 29 02:27:26 np0005539551 systemd[130286]: Finished Exit the Session.
Nov 29 02:27:26 np0005539551 systemd[130286]: Reached target Exit the Session.
Nov 29 02:27:26 np0005539551 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539551 systemd[1]: Stopped User Manager for UID 0.
Nov 29 02:27:26 np0005539551 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 02:27:26 np0005539551 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 02:27:26 np0005539551 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539551 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 02:27:26 np0005539551 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 02:27:26 np0005539551 python3.9[131022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:27.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539551 python3.9[131182]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:28.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:28 np0005539551 python3.9[131334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539551 python3.9[131486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:29 np0005539551 python3.9[131638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:30 np0005539551 python3.9[131790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:32 np0005539551 python3.9[131940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:33 np0005539551 python3.9[132092]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:27:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:35.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:35 np0005539551 python3.9[132242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:36 np0005539551 python3.9[132364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401253.8494823-224-98098014920493/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:37 np0005539551 python3.9[132646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:38 np0005539551 python3.9[132767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401257.2190616-269-187197555316510/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:39.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:27:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:41.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos active c 503..1068) lease_timeout -- calling new election
Nov 29 02:27:43 np0005539551 ceph-mon[81672]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:27:43 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(30) init, last seen epoch 30
Nov 29 02:27:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:43.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:27:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:45.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:46 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:46Z|00025|memory|INFO|16128 kB peak resident set size after 30.2 seconds
Nov 29 02:27:46 np0005539551 ovn_controller[130266]: 2025-11-29T07:27:46Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 29 02:27:46 np0005539551 podman[132826]: 2025-11-29 07:27:46.751029055 +0000 UTC m=+0.191251601 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:27:47 np0005539551 python3.9[132948]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:27:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:47.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:27:48 np0005539551 python3.9[133032]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:27:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:49.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:50.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:27:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:52 np0005539551 python3.9[133185]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:53.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:53 np0005539551 python3.9[133338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:54 np0005539551 python3.9[133459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401273.183442-380-75410049246609/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:54 np0005539551 python3.9[133609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:55.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:55 np0005539551 python3.9[133730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401274.493151-380-101702068144418/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:56 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:27:56 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:27:56 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:27:56 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:27:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:57 np0005539551 python3.9[133880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:57.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:57 np0005539551 python3.9[134001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401276.538812-512-38616852446369/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:58 np0005539551 ceph-mon[81672]: mon.compute-1 calling monitor election
Nov 29 02:27:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:27:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:58 np0005539551 python3.9[134151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:59 np0005539551 python3.9[134272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401277.9946852-512-257432563068449/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:27:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:59.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:59 np0005539551 python3.9[134422]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:28:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:01.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:01 np0005539551 python3.9[134576]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:02 np0005539551 python3.9[134728]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:03 np0005539551 python3.9[134806]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:03 np0005539551 python3.9[134958]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:04 np0005539551 python3.9[135036]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539551 python3.9[135188]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:06 np0005539551 python3.9[135340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:06 np0005539551 python3.9[135418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:07 np0005539551 python3.9[135570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:07.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:07 np0005539551 python3.9[135648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:08 np0005539551 python3.9[135800]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:08 np0005539551 systemd[1]: Reloading.
Nov 29 02:28:09 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:09 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:28:09 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:28:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:10 np0005539551 python3.9[135990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:10 np0005539551 python3.9[136068]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:11 np0005539551 python3.9[136220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:11 np0005539551 python3.9[136298]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:12 np0005539551 python3.9[136450]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:12 np0005539551 systemd[1]: Reloading.
Nov 29 02:28:12 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:28:12 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:13 np0005539551 systemd[1]: Starting Create netns directory...
Nov 29 02:28:13 np0005539551 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:28:13 np0005539551 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:28:13 np0005539551 systemd[1]: Finished Create netns directory.
Nov 29 02:28:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:13 np0005539551 python3.9[136643]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:14.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:14 np0005539551 python3.9[136795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:15 np0005539551 python3.9[136918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401294.1500406-965-17574812991452/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:15.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:16 np0005539551 python3.9[137070]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:17 np0005539551 podman[137194]: 2025-11-29 07:28:17.119169724 +0000 UTC m=+0.100692373 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:28:17 np0005539551 python3.9[137241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:17.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:17 np0005539551 python3.9[137371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.64907-1040-81177863830061/.source.json _original_basename=.f5jnvo1c follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:18.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:19.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:21 np0005539551 python3.9[137571]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:21.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:22.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:23 np0005539551 python3.9[138000]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 02:28:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:24 np0005539551 python3.9[138152]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:28:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:25.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539551 python3.9[138304]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:28:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:27.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539551 python3[138483]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:28:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:29.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 503..1101) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.220820874s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:28:39 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:28:39.742+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 503..1101) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.220820874s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:28:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:41.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:42.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:46.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:47 np0005539551 podman[138494]: 2025-11-29 07:28:47.031814595 +0000 UTC m=+19.036802558 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:47 np0005539551 podman[138623]: 2025-11-29 07:28:47.169546967 +0000 UTC m=+0.046410412 container create e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:28:47 np0005539551 podman[138623]: 2025-11-29 07:28:47.145858013 +0000 UTC m=+0.022721458 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:47 np0005539551 python3[138483]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:47.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:47 np0005539551 podman[138686]: 2025-11-29 07:28:47.678274606 +0000 UTC m=+0.118735885 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:28:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:28:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:49.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:51.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:52.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:55.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:28:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:28:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:59.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:29:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:01.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:03 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:29:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:03.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 get_health_metrics reporting 2 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:29:05 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:29:05.178+0000 7fe9991b7640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 2 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:29:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:05.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:29:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:06.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:07.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:29:08 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:29:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:09.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:09 np0005539551 python3.9[138840]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:10.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539551 python3.9[138994]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:11.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539551 python3.9[139070]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:12.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:13.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:13 np0005539551 python3.9[139221]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401352.0511045-1304-133138753907203/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:14.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:15 np0005539551 python3.9[139297]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:29:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:15 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:15.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:16 np0005539551 python3.9[139408]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:29:16 np0005539551 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:16 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:16 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:17 np0005539551 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 02:29:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:17.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:17 np0005539551 ceph-mon[81672]: Health check failed: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops. (SLOW_OPS)
Nov 29 02:29:17 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:29:17 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6f2df83a650809abe35fb0948d0565cc823b437f9bdb380762c6fda15b4a98/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:17 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6f2df83a650809abe35fb0948d0565cc823b437f9bdb380762c6fda15b4a98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:17 np0005539551 systemd[1]: Started /usr/bin/podman healthcheck run e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e.
Nov 29 02:29:17 np0005539551 podman[139449]: 2025-11-29 07:29:17.842436049 +0000 UTC m=+0.471658879 container init e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + sudo -E kolla_set_configs
Nov 29 02:29:17 np0005539551 podman[139449]: 2025-11-29 07:29:17.870883539 +0000 UTC m=+0.500106349 container start e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:29:17 np0005539551 edpm-start-podman-container[139449]: ovn_metadata_agent
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Validating config file
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Copying service configuration files
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 02:29:17 np0005539551 podman[139468]: 2025-11-29 07:29:17.918012669 +0000 UTC m=+0.111377288 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Writing out command to execute
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: ++ cat /run_command
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + CMD=neutron-ovn-metadata-agent
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + ARGS=
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + sudo kolla_copy_cacerts
Nov 29 02:29:17 np0005539551 podman[139484]: 2025-11-29 07:29:17.948802342 +0000 UTC m=+0.068170493 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + [[ ! -n '' ]]
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + . kolla_extend_start
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + umask 0022
Nov 29 02:29:17 np0005539551 ovn_metadata_agent[139465]: + exec neutron-ovn-metadata-agent
Nov 29 02:29:17 np0005539551 edpm-start-podman-container[139448]: Creating additional drop-in dependency for "ovn_metadata_agent" (e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e)
Nov 29 02:29:17 np0005539551 systemd[1]: Reloading.
Nov 29 02:29:18 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:18 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:18 np0005539551 systemd[1]: Started ovn_metadata_agent container.
Nov 29 02:29:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:18.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:18 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops.)
Nov 29 02:29:18 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:29:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:19.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.778 139482 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.778 139482 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.778 139482 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.778 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.779 139482 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.780 139482 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.781 139482 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.782 139482 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.783 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.784 139482 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.785 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.786 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.787 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.788 139482 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.789 139482 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.790 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.791 139482 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.792 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.793 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.794 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.795 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.796 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.797 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.798 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.799 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.800 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.801 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.802 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.803 139482 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.804 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.805 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.806 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.807 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.808 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.809 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.810 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.811 139482 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.821 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.821 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.821 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.821 139482 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.821 139482 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.836 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a37d8697-3fee-4a55-8dd5-3894cb7e8e1c (UUID: a37d8697-3fee-4a55-8dd5-3894cb7e8e1c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.871 139482 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.871 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.871 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.871 139482 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.874 139482 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.881 139482 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.888 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a37d8697-3fee-4a55-8dd5-3894cb7e8e1c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], external_ids={}, name=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, nb_cfg_timestamp=1764401244533, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.889 139482 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb3c845dbb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.890 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.890 139482 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.890 139482 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.890 139482 INFO oslo_service.service [-] Starting 1 workers
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.894 139482 DEBUG oslo_service.service [-] Started child 139598 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.897 139598 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169363'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.897 139482 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpvcrrq5lg/privsep.sock']
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.918 139598 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.918 139598 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.918 139598 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.922 139598 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.928 139598 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:19.940 139598 INFO eventlet.wsgi.server [-] (139598) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 02:29:20 np0005539551 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 02:29:20 np0005539551 systemd[1]: session-47.scope: Consumed 58.838s CPU time.
Nov 29 02:29:20 np0005539551 systemd-logind[788]: Session 47 logged out. Waiting for processes to exit.
Nov 29 02:29:20 np0005539551 systemd-logind[788]: Removed session 47.
Nov 29 02:29:20 np0005539551 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.587 139482 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.588 139482 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvcrrq5lg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.454 139603 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.458 139603 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.460 139603 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.460 139603 INFO oslo.privsep.daemon [-] privsep daemon running as pid 139603
Nov 29 02:29:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:20.592 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[1c36c716-6e4a-477b-b459-4016fa1f31d1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:29:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:20.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:21.112 139603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:29:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:21.113 139603 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:29:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:21.113 139603 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:29:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:21.632 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fa1f1c-bd94-40eb-a415-b2c96885bdd0]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:29:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:21.637 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, column=external_ids, values=({'neutron:ovn-metadata-id': '496ab20b-5df6-5862-90fc-224b9a9ac399'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:29:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:22.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:23.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:24.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:25.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:27 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:29:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:27.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:28.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:29:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:29.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:30.409 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:29:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:30.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:31.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:32.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:33.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:34.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:29:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:35.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:36.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:38.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:29:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:29:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:40.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.830 139482 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.830 139482 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.830 139482 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.830 139482 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.830 139482 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.831 139482 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.831 139482 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.831 139482 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.831 139482 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.832 139482 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.833 139482 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.834 139482 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.834 139482 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.834 139482 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.834 139482 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.834 139482 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.835 139482 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.836 139482 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.836 139482 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.836 139482 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.836 139482 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.836 139482 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.837 139482 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.838 139482 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.839 139482 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.840 139482 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.841 139482 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.841 139482 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.841 139482 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.841 139482 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.842 139482 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.843 139482 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.844 139482 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.845 139482 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.846 139482 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.847 139482 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.848 139482 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.849 139482 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.850 139482 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.851 139482 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.852 139482 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.853 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.854 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.855 139482 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.856 139482 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.857 139482 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.858 139482 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.859 139482 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.860 139482 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.861 139482 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.862 139482 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.863 139482 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.864 139482 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.865 139482 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.866 139482 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.867 139482 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.867 139482 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.867 139482 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.867 139482 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.868 139482 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.869 139482 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.870 139482 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.871 139482 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.872 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.873 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.874 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.875 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.876 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.877 139482 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.878 139482 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.878 139482 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.878 139482 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:29:40.878 139482 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:29:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:41.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:42.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:43.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:44.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:45 np0005539551 systemd-logind[788]: New session 48 of user zuul.
Nov 29 02:29:45 np0005539551 systemd[1]: Started Session 48 of User zuul.
Nov 29 02:29:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:45.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:46 np0005539551 python3.9[139942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:29:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:46.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:48 np0005539551 podman[139972]: 2025-11-29 07:29:48.628515687 +0000 UTC m=+0.072247622 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:29:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:48.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:48 np0005539551 podman[139971]: 2025-11-29 07:29:48.692066076 +0000 UTC m=+0.145446840 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:29:49 np0005539551 python3.9[140143]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:50.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:51.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539551 python3.9[140308]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:52 np0005539551 systemd[1]: Reloading.
Nov 29 02:29:52 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:52 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:54.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:55 np0005539551 python3.9[140493]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:29:55 np0005539551 network[140510]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:29:55 np0005539551 network[140511]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:29:55 np0005539551 network[140512]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:29:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:55.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:29:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:30:01 np0005539551 python3.9[140774]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:02 np0005539551 python3.9[140927]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:02.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:02 np0005539551 python3.9[141080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:03 np0005539551 python3.9[141233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:04 np0005539551 python3.9[141386]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:05 np0005539551 python3.9[141539]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:06 np0005539551 python3.9[141692]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:06.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:07 np0005539551 python3.9[141845]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539551 python3.9[141997]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:08 np0005539551 python3.9[142149]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:09 np0005539551 python3.9[142301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:10 np0005539551 python3.9[142453]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:11 np0005539551 python3.9[142605]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:11.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:12 np0005539551 python3.9[142757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:12.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:12 np0005539551 python3.9[142909]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:13 np0005539551 python3.9[143061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:13.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:14 np0005539551 python3.9[143213]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:14 np0005539551 python3.9[143365]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:14.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:15 np0005539551 python3.9[143517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:15 np0005539551 python3.9[143669]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:16 np0005539551 python3.9[143821]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:16.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539551 python3.9[143973]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:18 np0005539551 python3.9[144125]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:30:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:18.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:19 np0005539551 podman[144250]: 2025-11-29 07:30:19.349141201 +0000 UTC m=+0.054600202 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:19 np0005539551 podman[144249]: 2025-11-29 07:30:19.382024043 +0000 UTC m=+0.087369311 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:19 np0005539551 python3.9[144312]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:30:19 np0005539551 systemd[1]: Reloading.
Nov 29 02:30:19 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:19 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:30:19.813 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:30:19.814 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:30:19.814 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:30:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:20.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:20 np0005539551 python3.9[144508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:21 np0005539551 python3.9[144661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:22 np0005539551 python3.9[144814]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:22.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:22 np0005539551 python3.9[144967]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:23 np0005539551 python3.9[145120]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:24 np0005539551 python3.9[145273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:24 np0005539551 python3.9[145426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:25.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:26 np0005539551 python3.9[145579]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 02:30:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:26.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:27 np0005539551 python3.9[145732]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:30:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:27.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:28 np0005539551 python3.9[145890]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:30:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:29.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539551 python3.9[146050]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:30:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:31 np0005539551 python3.9[146134]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:30:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:39.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:30:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:44.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:47.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:48.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:49 np0005539551 podman[146276]: 2025-11-29 07:30:49.604120296 +0000 UTC m=+0.059857575 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 02:30:49 np0005539551 podman[146275]: 2025-11-29 07:30:49.635268721 +0000 UTC m=+0.089675873 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:30:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:30:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:51.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:53 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:30:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:53.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:30:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:56.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:57.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:30:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:59.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:01.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:02.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:03.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:04.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:05.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:06.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:08.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:09.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:10.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:12.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:13.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:14.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:15.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:31:19.815 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:31:19.816 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:31:19.816 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:31:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:20 np0005539551 podman[146548]: 2025-11-29 07:31:20.646618688 +0000 UTC m=+0.084021130 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:31:20 np0005539551 podman[146547]: 2025-11-29 07:31:20.696520081 +0000 UTC m=+0.137598142 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:31:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:20.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:21.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:25.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:31.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:34 np0005539551 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:31:34 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:31:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:34.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:35.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:38.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:40.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:41.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:42.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:43.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:44.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:46.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:31:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:47.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:49.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:50.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:51 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 02:31:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:31:51 np0005539551 podman[146609]: 2025-11-29 07:31:51.679696212 +0000 UTC m=+0.118540966 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:31:51 np0005539551 podman[146610]: 2025-11-29 07:31:51.679137396 +0000 UTC m=+0.105766548 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 02:31:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:53.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:54.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:31:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:56.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:57.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 503..1209) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 8.274009705s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:31:58 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:31:58.413+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 503..1209) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 8.274009705s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:31:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:58 np0005539551 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:31:58 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:31:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:31:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:31:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:32:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:01.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:02 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:32:02 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:32:02 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:32:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 02:32:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 02:32:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:07.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:09.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:10 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 02:32:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:11.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:13.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:32:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:15.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539551 podman[146828]: 2025-11-29 07:32:15.972152623 +0000 UTC m=+5.133342283 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:32:16 np0005539551 podman[146828]: 2025-11-29 07:32:16.69770599 +0000 UTC m=+5.858895610 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:32:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:17.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:32:19.817 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:32:19.819 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:32:19.819 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:32:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:19.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:21.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:22 np0005539551 podman[151589]: 2025-11-29 07:32:22.64318329 +0000 UTC m=+0.083999909 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:22 np0005539551 podman[151575]: 2025-11-29 07:32:22.674303384 +0000 UTC m=+0.117095617 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 02:32:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:23 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:32:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:23.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:25.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:32:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:32:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:29.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:29.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:33.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:35.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:37.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:41.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:43.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:45.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:47.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:49.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:53 np0005539551 podman[163962]: 2025-11-29 07:32:53.64902511 +0000 UTC m=+0.096991392 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:53 np0005539551 podman[163961]: 2025-11-29 07:32:53.667458519 +0000 UTC m=+0.118637768 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:53.929451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573929554, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2404, "num_deletes": 250, "total_data_size": 6225216, "memory_usage": 6317592, "flush_reason": "Manual Compaction"}
Nov 29 02:32:53 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 29 02:32:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401574549833, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4036707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10655, "largest_seqno": 13054, "table_properties": {"data_size": 4026688, "index_size": 6385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20243, "raw_average_key_size": 19, "raw_value_size": 4006450, "raw_average_value_size": 3931, "num_data_blocks": 283, "num_entries": 1019, "num_filter_entries": 1019, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401182, "oldest_key_time": 1764401182, "file_creation_time": 1764401573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:32:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 621077 microseconds, and 11987 cpu microseconds.
Nov 29 02:32:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:32:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:54.550529) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4036707 bytes OK
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:54.550778) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:55.312521) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:55.312580) EVENT_LOG_v1 {"time_micros": 1764401575312565, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:55.312613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6214618, prev total WAL file size 6239084, number of live WAL files 2.
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:55.317589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3942KB)], [21(9360KB)]
Nov 29 02:32:55 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401575317707, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 13621928, "oldest_snapshot_seqno": -1}
Nov 29 02:32:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:56.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4520 keys, 13069560 bytes, temperature: kUnknown
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401576577401, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 13069560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13031929, "index_size": 25233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111817, "raw_average_key_size": 24, "raw_value_size": 12942893, "raw_average_value_size": 2863, "num_data_blocks": 1079, "num_entries": 4520, "num_filter_entries": 4520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.577727) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13069560 bytes
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.694989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.8 rd, 10.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.2) OK, records in: 5040, records dropped: 520 output_compression: NoCompression
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.695034) EVENT_LOG_v1 {"time_micros": 1764401576695018, "job": 10, "event": "compaction_finished", "compaction_time_micros": 1259766, "compaction_time_cpu_micros": 38372, "output_level": 6, "num_output_files": 1, "total_output_size": 13069560, "num_input_records": 5040, "num_output_records": 4520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401576696004, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401576697374, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:55.317363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.697405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.697410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.697412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.697414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:32:56.697416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:57.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:58.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:32:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:00.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:03 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:04.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:05.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:05.308745) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401585309010, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 273, "num_deletes": 250, "total_data_size": 71358, "memory_usage": 78232, "flush_reason": "Manual Compaction"}
Nov 29 02:33:05 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 29 02:33:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401586142622, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 47119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13056, "largest_seqno": 13327, "table_properties": {"data_size": 45261, "index_size": 87, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 3829, "raw_average_key_size": 14, "raw_value_size": 41670, "raw_average_value_size": 156, "num_data_blocks": 4, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401575, "oldest_key_time": 1764401575, "file_creation_time": 1764401585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 834111 microseconds, and 1713 cpu microseconds.
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.142859) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 47119 bytes OK
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.142940) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.435997) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.436057) EVENT_LOG_v1 {"time_micros": 1764401586436042, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.436082) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 69281, prev total WAL file size 69281, number of live WAL files 2.
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.437139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(46KB)], [24(12MB)]
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401586437210, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 13116679, "oldest_snapshot_seqno": -1}
Nov 29 02:33:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4527 keys, 13111107 bytes, temperature: kUnknown
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401587858519, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13111107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13073428, "index_size": 25281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111984, "raw_average_key_size": 24, "raw_value_size": 12984251, "raw_average_value_size": 2868, "num_data_blocks": 1081, "num_entries": 4527, "num_filter_entries": 4527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401586, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.858922) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13111107 bytes
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.925108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.2 rd, 9.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(556.6) write-amplify(278.3) OK, records in: 4787, records dropped: 260 output_compression: NoCompression
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.925187) EVENT_LOG_v1 {"time_micros": 1764401587925152, "job": 12, "event": "compaction_finished", "compaction_time_micros": 1421443, "compaction_time_cpu_micros": 52535, "output_level": 6, "num_output_files": 1, "total_output_size": 13111107, "num_input_records": 4787, "num_output_records": 4527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401587925706, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401587928004, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:06.437053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.928089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.928097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.928100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.928103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:07 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:07.928105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:08 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:33:08 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:33:08 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:08 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:33:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:10.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:13.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:16.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:18.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:33:19.819 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:33:19.819 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:33:19.820 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:33:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:22.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:23.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:24.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:24 np0005539551 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:33:24 np0005539551 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:33:24 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 02:33:24 np0005539551 podman[164024]: 2025-11-29 07:33:24.619199259 +0000 UTC m=+0.061078968 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:33:24 np0005539551 podman[164023]: 2025-11-29 07:33:24.65020956 +0000 UTC m=+0.092812188 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:33:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:26.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:30.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:31 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:32.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:33.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:37.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:38.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:40.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:42.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:45.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:46.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 503..1253) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 15.423488617s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:33:47 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:33:47.617+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 503..1253) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 15.423488617s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:33:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:51.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:52.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:55.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(47) init, last seen epoch 47, mid-election, bumping
Nov 29 02:33:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:33:55 np0005539551 podman[164246]: 2025-11-29 07:33:55.626113212 +0000 UTC m=+0.071623143 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:33:55 np0005539551 podman[164245]: 2025-11-29 07:33:55.674621078 +0000 UTC m=+0.124637992 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 02:33:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:58 np0005539551 ceph-mon[81672]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.509591) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401638509710, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 527, "num_deletes": 251, "total_data_size": 809430, "memory_usage": 820376, "flush_reason": "Manual Compaction"}
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401638875072, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 652025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13332, "largest_seqno": 13854, "table_properties": {"data_size": 648841, "index_size": 1092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7823, "raw_average_key_size": 19, "raw_value_size": 642140, "raw_average_value_size": 1589, "num_data_blocks": 49, "num_entries": 404, "num_filter_entries": 404, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401587, "oldest_key_time": 1764401587, "file_creation_time": 1764401638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 365588 microseconds, and 5949 cpu microseconds.
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.875186) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 652025 bytes OK
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.875214) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.998197) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.998258) EVENT_LOG_v1 {"time_micros": 1764401638998243, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.998284) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 805982, prev total WAL file size 805982, number of live WAL files 2.
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.999363) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(636KB)], [27(12MB)]
Nov 29 02:33:58 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401638999429, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 13763132, "oldest_snapshot_seqno": -1}
Nov 29 02:33:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:33:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:59.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4408 keys, 10616329 bytes, temperature: kUnknown
Nov 29 02:34:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401640891508, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 10616329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10581824, "index_size": 22363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 110645, "raw_average_key_size": 25, "raw_value_size": 10497055, "raw_average_value_size": 2381, "num_data_blocks": 942, "num_entries": 4408, "num_filter_entries": 4408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:34:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 get_health_metrics reporting 2 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:34:01 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:34:01.132+0000 7fe9991b7640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 2 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:00.891750) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 10616329 bytes
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.291484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 7.3 rd, 5.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(37.4) write-amplify(16.3) OK, records in: 4931, records dropped: 523 output_compression: NoCompression
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.291528) EVENT_LOG_v1 {"time_micros": 1764401641291513, "job": 14, "event": "compaction_finished", "compaction_time_micros": 1892183, "compaction_time_cpu_micros": 36078, "output_level": 6, "num_output_files": 1, "total_output_size": 10616329, "num_input_records": 4931, "num_output_records": 4408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401641292011, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401641294213, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:33:58.999226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.294279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.294285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.294311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.294313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:34:01.294315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:34:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:04.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:05.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:06.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:07.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:08.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:34:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:09 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:34:09.892+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 754..1280) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.395316392s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:34:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 754..1280) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.395316392s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:34:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:10.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:11 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:12.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:14.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:15 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:17 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:34:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:19 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 754..1281) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.845697880s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:34:19 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:34:19.657+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 754..1281) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.845697880s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:34:19.821 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:34:19.821 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:34:19.822 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:34:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:23 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:23.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:26.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:26 np0005539551 podman[164295]: 2025-11-29 07:34:26.600974227 +0000 UTC m=+0.053538043 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:34:26 np0005539551 podman[164294]: 2025-11-29 07:34:26.626580172 +0000 UTC m=+0.086038944 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:34:27 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:31 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:31.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:33.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:34.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:35 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:35.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:34:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.9 total, 600.0 interval#012Cumulative writes: 6389 writes, 26K keys, 6389 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6389 writes, 1108 syncs, 5.77 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 415 writes, 648 keys, 415 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 415 writes, 191 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.9 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 29 02:34:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:38.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:39.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:40.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:43 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:45.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:47.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:48.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:49.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:50.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:52.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:53.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:54.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:57 np0005539551 podman[164594]: 2025-11-29 07:34:57.637551143 +0000 UTC m=+0.087070962 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:34:57 np0005539551 podman[164593]: 2025-11-29 07:34:57.727784591 +0000 UTC m=+0.174499664 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:34:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:57.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:34:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:34:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:01.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:02.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:03 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:03 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:03 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:35:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:03.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:04 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:05 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:05.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:06 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:07 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:07 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:35:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:07.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 get_health_metrics reporting 13 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:35:07 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:35:07.977+0000 7fe9991b7640 -1 mon.compute-1@2(electing) e3 get_health_metrics reporting 13 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:35:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:08 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:09 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:35:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 2191 writes, 13K keys, 2191 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 2190 writes, 2190 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 724 writes, 3983 keys, 724 commit groups, 1.0 writes per commit group, ingest: 8.20 MB, 0.01 MB/s#012Interval WAL: 723 writes, 723 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.0      2.89              0.07         7    0.412       0      0       0.0       0.0#012  L6      1/0   10.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6     12.5     11.2      5.65              0.20         6    0.942     27K   2545       0.0       0.0#012 Sum      1/0   10.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6      8.3      9.4      8.54              0.27        13    0.657     27K   2545       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.2      7.0      7.1      7.10              0.18         8    0.887     19K   1825       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     12.5     11.2      5.65              0.20         6    0.942     27K   2545       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.0      2.88              0.07         6    0.481       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.017, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.07 MB/s write, 0.07 GB read, 0.06 MB/s read, 8.5 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 7.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 2.24 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(105,1.98 MB,0.649899%) FilterBlock(13,84.36 KB,0.0270994%) IndexBlock(13,182.27 KB,0.0585506%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:35:10 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:11 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:11 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:35:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:11.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:12.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:12 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:13 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:13.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:14.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:15 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:15 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:35:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:16 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:17 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:17.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 get_health_metrics reporting 16 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:35:18 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Nov 29 02:35:18 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:35:18.040+0000 7fe9991b7640 -1 mon.compute-1@2(electing) e3 get_health_metrics reporting 16 slow ops, oldest is mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0)
Nov 29 02:35:18 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:35:18 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:35:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:18 np0005539551 ceph-mgr[82034]: ms_deliver_dispatch: unhandled message 0x5625f2225080 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:35:18 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:35:19 np0005539551 ceph-mon[81672]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: Health check failed: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops)
Nov 29 02:35:19 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:35:19.822 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:35:19.823 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:35:19.823 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:35:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:20.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:35:21 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:35:21 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:35:21 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:21 np0005539551 ceph-mon[81672]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:35:21 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:35:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:21.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:22.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:24 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:35:24 np0005539551 dbus-broker-launch[756]: Noticed file-system modification, trigger reload.
Nov 29 02:35:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:24.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:24 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:35:24 np0005539551 ceph-mon[81672]: Health check failed: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:25.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:26.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:27.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops)
Nov 29 02:35:28 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:35:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:28.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539551 podman[164708]: 2025-11-29 07:35:28.621759275 +0000 UTC m=+0.064089359 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:35:28 np0005539551 podman[164707]: 2025-11-29 07:35:28.66692032 +0000 UTC m=+0.120041347 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:35:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:29.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:31.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:33.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:36.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:38.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:40.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:41.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:42.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:44.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:44.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:46.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:52.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:56.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:35:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539551 podman[164969]: 2025-11-29 07:35:58.895357972 +0000 UTC m=+0.059441392 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 02:35:58 np0005539551 podman[164968]: 2025-11-29 07:35:58.935162333 +0000 UTC m=+0.099187222 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:36:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:02 np0005539551 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 02:36:02 np0005539551 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 02:36:02 np0005539551 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 02:36:02 np0005539551 systemd[1]: sshd.service: Consumed 2.250s CPU time, read 32.0K from disk, written 0B to disk.
Nov 29 02:36:02 np0005539551 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 02:36:02 np0005539551 systemd[1]: Stopping sshd-keygen.target...
Nov 29 02:36:02 np0005539551 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:36:02 np0005539551 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:36:02 np0005539551 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:36:02 np0005539551 systemd[1]: Reached target sshd-keygen.target.
Nov 29 02:36:02 np0005539551 systemd[1]: Starting OpenSSH server daemon...
Nov 29 02:36:02 np0005539551 systemd[1]: Started OpenSSH server daemon.
Nov 29 02:36:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:04 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:36:04 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:36:04 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:04 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:04 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:05 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:36:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:06.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:08.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:08.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:10.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:36:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:36:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:12.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:14.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:15 np0005539551 python3.9[174542]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:15 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539551 python3.9[174732]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:16 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:16 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:16 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:18 np0005539551 python3.9[174970]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:18 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:18 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:18 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:18.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:18.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:36:18 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:36:18 np0005539551 systemd[1]: man-db-cache-update.service: Consumed 10.217s CPU time.
Nov 29 02:36:18 np0005539551 systemd[1]: run-r981a795b9a1b408bbec43ff1b16080e5.service: Deactivated successfully.
Nov 29 02:36:19 np0005539551 python3.9[175163]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:19 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:19 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:19 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:36:19.823 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:36:19.825 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:36:19.825 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:20.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:21 np0005539551 python3.9[175353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:21 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:22 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:22 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:22.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:23 np0005539551 python3.9[175544]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:24 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:24 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:24 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:25 np0005539551 python3.9[175734]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:25 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:25 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:25 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:26 np0005539551 python3.9[175925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:27 np0005539551 python3.9[176080]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:27 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:27 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:27 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:28 np0005539551 python3.9[176270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:29 np0005539551 podman[176272]: 2025-11-29 07:36:29.023605111 +0000 UTC m=+0.082015335 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:36:29 np0005539551 systemd[1]: Reloading.
Nov 29 02:36:29 np0005539551 podman[176283]: 2025-11-29 07:36:29.208127433 +0000 UTC m=+0.210154138 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 02:36:29 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:29 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:29 np0005539551 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 02:36:29 np0005539551 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 02:36:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:30.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:30.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:30 np0005539551 python3.9[176506]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:31 np0005539551 python3.9[176661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:32 np0005539551 python3.9[176816]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:32.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:32.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:33 np0005539551 python3.9[176971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:33 np0005539551 python3.9[177126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:34.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:34 np0005539551 python3.9[177281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:35 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:36:35 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:36:35 np0005539551 python3.9[177436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:36 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:36:36 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:36:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:36.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539551 python3.9[177591]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:36.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:36:37 np0005539551 python3.9[177746]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:38 np0005539551 python3.9[177901]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:38.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:38 np0005539551 python3.9[178056]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:40.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539551 python3.9[178212]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:41 np0005539551 python3.9[178367]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:42.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539551 python3.9[178522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:44 np0005539551 python3.9[178677]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:44.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:44 np0005539551 python3.9[178829]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:46.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:47 np0005539551 python3.9[178981]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:48 np0005539551 python3.9[179133]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:48.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:48 np0005539551 python3.9[179285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:50.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:50 np0005539551 python3.9[179437]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:51 np0005539551 python3.9[179589]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:52.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:52 np0005539551 python3.9[179714]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401810.9852107-1634-223781871215011/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:53 np0005539551 python3.9[179866]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:53 np0005539551 python3.9[179991]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401812.6810627-1634-15628272394082/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:54.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:56 np0005539551 python3.9[180143]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:57 np0005539551 python3.9[180268]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401814.0157769-1634-172355779647502/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:57 np0005539551 python3.9[180420]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:58.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:36:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:58 np0005539551 python3.9[180545]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401817.304322-1634-122995031573531/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:59 np0005539551 python3.9[180697]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:59 np0005539551 podman[180794]: 2025-11-29 07:36:59.63655802 +0000 UTC m=+0.081785914 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:36:59 np0005539551 podman[180795]: 2025-11-29 07:36:59.673567104 +0000 UTC m=+0.108351512 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:36:59 np0005539551 python3.9[180861]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401818.643464-1634-273895601611928/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:00.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:00 np0005539551 python3.9[181020]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:01 np0005539551 python3.9[181145]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401819.9992125-1634-109787720929102/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:01 np0005539551 python3.9[181297]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:02 np0005539551 python3.9[181420]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401821.3615475-1634-74828077677391/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:03 np0005539551 python3.9[181572]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:03 np0005539551 python3.9[181697]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401822.6419208-1634-60525523391351/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:04 np0005539551 python3.9[181849]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 02:37:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:04.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:06.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:07 np0005539551 python3.9[182002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:07 np0005539551 python3.9[182154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:37:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:37:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:08.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:08 np0005539551 python3.9[182306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:09 np0005539551 python3.9[182459]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:09 np0005539551 python3.9[182612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:10.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:10 np0005539551 python3.9[182764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:11 np0005539551 python3.9[182916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539551 python3.9[183068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:12 np0005539551 python3.9[183220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:13 np0005539551 python3.9[183372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539551 python3.9[183524]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:14.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:14 np0005539551 python3.9[183676]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.251210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835251260, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1468, "num_deletes": 252, "total_data_size": 4220664, "memory_usage": 4268992, "flush_reason": "Manual Compaction"}
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835548266, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2122001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13859, "largest_seqno": 15322, "table_properties": {"data_size": 2116423, "index_size": 2714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14743, "raw_average_key_size": 21, "raw_value_size": 2104005, "raw_average_value_size": 3010, "num_data_blocks": 121, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401641, "oldest_key_time": 1764401641, "file_creation_time": 1764401835, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 297202 microseconds, and 5587 cpu microseconds.
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:15 np0005539551 python3.9[183828]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.548404) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2122001 bytes OK
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.548432) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.653347) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.653464) EVENT_LOG_v1 {"time_micros": 1764401835653451, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.653494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 4212980, prev total WAL file size 4212980, number of live WAL files 2.
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.655681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2072KB)], [30(10MB)]
Nov 29 02:37:15 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835655794, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 12738330, "oldest_snapshot_seqno": -1}
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4645 keys, 9907221 bytes, temperature: kUnknown
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401836075677, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9907221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9873675, "index_size": 20828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116203, "raw_average_key_size": 25, "raw_value_size": 9787170, "raw_average_value_size": 2107, "num_data_blocks": 876, "num_entries": 4645, "num_filter_entries": 4645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764401835, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.076278) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9907221 bytes
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.257136) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.3 rd, 23.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(10.7) write-amplify(4.7) OK, records in: 5107, records dropped: 462 output_compression: NoCompression
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.257178) EVENT_LOG_v1 {"time_micros": 1764401836257160, "job": 16, "event": "compaction_finished", "compaction_time_micros": 420274, "compaction_time_cpu_micros": 50713, "output_level": 6, "num_output_files": 1, "total_output_size": 9907221, "num_input_records": 5107, "num_output_records": 4645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401836257827, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401836260099, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:15.655510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.260203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.260210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.260212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.260215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:37:16.260218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:16.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:16 np0005539551 python3.9[183980]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:16.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:17 np0005539551 python3.9[184132]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:17 np0005539551 python3.9[184255]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401836.5906024-2297-6630039101062/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:18.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:18 np0005539551 python3.9[184499]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:18.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:18 np0005539551 python3.9[184661]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401837.8411195-2297-241650534082675/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:19 np0005539551 python3.9[184813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:37:19.825 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:37:19.826 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:37:19.826 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:37:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:20 np0005539551 python3.9[184936]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401839.1399968-2297-66750153149636/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:20.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:21 np0005539551 python3.9[185088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:37:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:37:21 np0005539551 python3.9[185211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401840.580527-2297-190299575936667/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:22 np0005539551 python3.9[185363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:37:22 np0005539551 python3.9[185486]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401841.8832781-2297-9172440176234/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:23 np0005539551 python3.9[185638]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:24 np0005539551 python3.9[185761]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401843.1213026-2297-176803476745629/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539551 python3.9[185913]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:25 np0005539551 python3.9[186036]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401844.4026303-2297-229473944598462/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:26 np0005539551 python3.9[186188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:26 np0005539551 python3.9[186311]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401845.7326066-2297-210170169392081/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:27 np0005539551 python3.9[186463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:27 np0005539551 python3.9[186586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401846.9864995-2297-27018850704786/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:37:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:37:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:28.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539551 python3.9[186738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:29 np0005539551 python3.9[186861]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401848.1472745-2297-28312112007167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:29 np0005539551 podman[187014]: 2025-11-29 07:37:29.833474825 +0000 UTC m=+0.089996149 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:37:29 np0005539551 podman[187013]: 2025-11-29 07:37:29.862153751 +0000 UTC m=+0.121934744 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:37:29 np0005539551 python3.9[187015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:30.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:30 np0005539551 python3.9[187234]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401849.4087842-2297-139462576452815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:31 np0005539551 python3.9[187386]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:31 np0005539551 python3.9[187509]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401850.6927564-2297-156645438189430/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:32.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539551 python3.9[187661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:33 np0005539551 python3.9[187784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401851.9075594-2297-144701568407197/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:33 np0005539551 python3.9[187936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:34.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:34 np0005539551 python3.9[188059]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401853.3710568-2297-253397580719657/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:37:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:37:35 np0005539551 python3.9[188209]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:37:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:36.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:36 np0005539551 python3.9[188364]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 02:37:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:39 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:37:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:40.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:44 np0005539551 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 02:37:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:44 np0005539551 python3.9[188520]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:45 np0005539551 python3.9[188672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:46 np0005539551 python3.9[188824]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:46 np0005539551 python3.9[188976]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:47 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:37:47 np0005539551 python3.9[189128]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:48 np0005539551 python3.9[189280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:49 np0005539551 python3.9[189432]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:49 np0005539551 python3.9[189584]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:50.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:50 np0005539551 python3.9[189736]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:51 np0005539551 python3.9[189888]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:37:52 np0005539551 python3.9[190040]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:52 np0005539551 systemd[1]: Reloading.
Nov 29 02:37:52 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:52 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:52.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:54.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:55 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:37:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:56.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:37:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:58.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:59 np0005539551 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 02:37:59 np0005539551 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 02:37:59 np0005539551 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 02:37:59 np0005539551 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 02:37:59 np0005539551 systemd[1]: Starting libvirt logging daemon...
Nov 29 02:37:59 np0005539551 systemd[1]: Started libvirt logging daemon.
Nov 29 02:37:59 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:37:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 754..1469) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 11.678073883s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:37:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:59 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:37:59.715+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 754..1469) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 11.678073883s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:37:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539551 podman[190208]: 2025-11-29 07:38:00.159602424 +0000 UTC m=+0.089254078 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:38:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539551 podman[190207]: 2025-11-29 07:38:00.211924388 +0000 UTC m=+0.142003874 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:38:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539551 python3.9[190268]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:38:00 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:00 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:00 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539551 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 02:38:00 np0005539551 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 02:38:00 np0005539551 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 02:38:00 np0005539551 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 02:38:00 np0005539551 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 02:38:00 np0005539551 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 02:38:00 np0005539551 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:38:00 np0005539551 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:38:01 np0005539551 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:38:01 np0005539551 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:38:01 np0005539551 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:38:01 np0005539551 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 02:38:01 np0005539551 python3.9[190496]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:38:01 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:01 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:01 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:02 np0005539551 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 02:38:02 np0005539551 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 02:38:02 np0005539551 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 02:38:02 np0005539551 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 02:38:02 np0005539551 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:38:02 np0005539551 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:38:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539551 setroubleshoot[190391]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe6f5959-2a7a-4e4e-b18c-c7b5fb4bf1a0
Nov 29 02:38:02 np0005539551 setroubleshoot[190391]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:38:02 np0005539551 setroubleshoot[190391]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe6f5959-2a7a-4e4e-b18c-c7b5fb4bf1a0
Nov 29 02:38:02 np0005539551 setroubleshoot[190391]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:03 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:38:03 np0005539551 python3.9[190717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:38:03 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:03 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:03 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:03 np0005539551 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 02:38:03 np0005539551 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 02:38:03 np0005539551 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 02:38:03 np0005539551 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 02:38:03 np0005539551 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 02:38:03 np0005539551 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 02:38:03 np0005539551 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 02:38:03 np0005539551 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 02:38:03 np0005539551 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 02:38:03 np0005539551 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 02:38:03 np0005539551 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:38:03 np0005539551 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:38:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539551 python3.9[190931]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:38:04 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:04 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:04 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:05 np0005539551 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 02:38:05 np0005539551 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 02:38:05 np0005539551 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 02:38:05 np0005539551 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 02:38:05 np0005539551 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 02:38:05 np0005539551 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 02:38:05 np0005539551 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:38:05 np0005539551 systemd[1]: Started libvirt secret daemon.
Nov 29 02:38:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:06 np0005539551 python3.9[191143]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:06 np0005539551 python3.9[191295]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:07 np0005539551 python3.9[191447]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539551 python3.9[191601]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:09 np0005539551 python3.9[191751]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:10 np0005539551 python3.9[191872]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401889.175264-3371-17876640416092/.source.xml follow=False _original_basename=secret.xml.j2 checksum=adf02dc8f6a63a8cc45a7e93e335963254ff5ce7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:11 np0005539551 python3.9[192024]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine b66774a7-56d9-5535-bd8c-681234404870#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:11 np0005539551 python3.9[192186]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:12.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:12 np0005539551 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 02:38:12 np0005539551 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.060s CPU time.
Nov 29 02:38:12 np0005539551 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:38:14 np0005539551 python3.9[192650]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:14.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:14 np0005539551 python3.9[192802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:15 np0005539551 python3.9[192925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401894.4848423-3536-270226458104671/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:16 np0005539551 python3.9[193077]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:17 np0005539551 python3.9[193229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:17 np0005539551 python3.9[193307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:18 np0005539551 python3.9[193459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:18 np0005539551 python3.9[193537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.q4m_8jqs recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:19 np0005539551 python3.9[193689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:38:19.826 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:38:19.827 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:38:19.827 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:38:20 np0005539551 python3.9[193767]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:21 np0005539551 python3.9[193919]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:21 np0005539551 auditd[702]: Audit daemon rotating log files
Nov 29 02:38:22 np0005539551 python3[194072]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:38:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539551 python3.9[194224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:23 np0005539551 python3.9[194302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:24 np0005539551 python3.9[194454]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:24 np0005539551 python3.9[194532]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:38:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:38:25 np0005539551 python3.9[194684]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:25 np0005539551 python3.9[194762]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:26 np0005539551 python3.9[194914]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:27 np0005539551 python3.9[194992]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:29 np0005539551 python3.9[195144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:30 np0005539551 podman[195293]: 2025-11-29 07:38:30.352043967 +0000 UTC m=+0.074251377 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:38:30 np0005539551 podman[195292]: 2025-11-29 07:38:30.392696572 +0000 UTC m=+0.113837483 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:38:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:30 np0005539551 python3.9[195296]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401907.537068-3911-162573859227184/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:31 np0005539551 python3.9[195593]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:32 np0005539551 python3.9[195745]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:32 np0005539551 python3.9[195900]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:33 np0005539551 python3.9[196052]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:34 np0005539551 python3.9[196205]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:38:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:35 np0005539551 python3.9[196359]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:35 np0005539551 python3.9[196514]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:36 np0005539551 python3.9[196666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:37 np0005539551 python3.9[196789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401916.235043-4127-106460482394619/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:37 np0005539551 python3.9[196941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539551 python3.9[197114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401917.4877698-4172-35028251089207/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:39 np0005539551 python3.9[197266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:39 np0005539551 python3.9[197389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401918.7727787-4217-269186686077716/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:40 np0005539551 python3.9[197541]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:38:40 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:40 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:40 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:41 np0005539551 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 02:38:42 np0005539551 python3.9[197731]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:38:42 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:42 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:42 np0005539551 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:42.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:43 np0005539551 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 02:38:43 np0005539551 systemd[1]: session-48.scope: Consumed 3min 32.169s CPU time.
Nov 29 02:38:43 np0005539551 systemd-logind[788]: Session 48 logged out. Waiting for processes to exit.
Nov 29 02:38:43 np0005539551 systemd-logind[788]: Removed session 48.
Nov 29 02:38:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:38:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:46 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:38:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539551 systemd-logind[788]: New session 49 of user zuul.
Nov 29 02:38:50 np0005539551 systemd[1]: Started Session 49 of User zuul.
Nov 29 02:38:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:51 np0005539551 python3.9[197981]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:38:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:53 np0005539551 python3.9[198135]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:38:53 np0005539551 network[198152]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:38:53 np0005539551 network[198153]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:38:53 np0005539551 network[198154]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:38:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:54.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:56.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:58 np0005539551 python3.9[198426]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:38:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:38:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:38:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:58.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:59 np0005539551 python3.9[198510]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:39:00 np0005539551 podman[198513]: 2025-11-29 07:39:00.62417801 +0000 UTC m=+0.081354408 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:00 np0005539551 podman[198512]: 2025-11-29 07:39:00.6274567 +0000 UTC m=+0.084283288 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 02:39:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:00.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:00 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:06 np0005539551 python3.9[198708]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:07 np0005539551 python3.9[198860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:08 np0005539551 python3.9[199013]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:08 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:09 np0005539551 python3.9[199165]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:10 np0005539551 python3.9[199318]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:10 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:10 np0005539551 python3.9[199441]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401949.663998-251-234312562202531/.source.iscsi _original_basename=.10_atymj follow=False checksum=7602bdd0cf527ec5521f9370326234f2b983a096 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:11 np0005539551 python3.9[199593]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:12.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539551 python3.9[199745]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:14 np0005539551 python3.9[199897]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:14 np0005539551 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 02:39:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:15 np0005539551 python3.9[200053]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:15 np0005539551 systemd[1]: Reloading.
Nov 29 02:39:15 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:15 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:15 np0005539551 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:39:15 np0005539551 systemd[1]: Starting Open-iSCSI...
Nov 29 02:39:15 np0005539551 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 02:39:15 np0005539551 systemd[1]: Started Open-iSCSI.
Nov 29 02:39:15 np0005539551 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 02:39:15 np0005539551 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 02:39:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:16.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539551 python3.9[200254]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:39:16 np0005539551 network[200271]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:39:16 np0005539551 network[200272]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:39:16 np0005539551 network[200273]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:39:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:18.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:39:19.827 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:39:19.828 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:39:19.828 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:20.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:22 np0005539551 python3.9[200547]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:39:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:23 np0005539551 python3.9[200699]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 02:39:23 np0005539551 python3.9[200855]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:24 np0005539551 python3.9[200978]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401963.453231-482-216648387366078/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:25 np0005539551 python3.9[201130]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:26 np0005539551 python3.9[201282]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:39:26 np0005539551 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:39:26 np0005539551 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:39:26 np0005539551 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:39:26 np0005539551 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:39:26 np0005539551 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:39:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:27 np0005539551 python3.9[201438]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:28 np0005539551 python3.9[201590]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:29 np0005539551 python3.9[201742]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:30 np0005539551 python3.9[201894]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:30 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:30 np0005539551 podman[201989]: 2025-11-29 07:39:30.889525344 +0000 UTC m=+0.082007425 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:39:30 np0005539551 podman[201990]: 2025-11-29 07:39:30.899882927 +0000 UTC m=+0.086743095 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:39:31 np0005539551 python3.9[202052]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401969.9760845-656-128400303543784/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:31 np0005539551 python3.9[202211]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:32 np0005539551 python3.9[202364]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:32 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:33 np0005539551 python3.9[202516]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539551 python3.9[202668]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:35 np0005539551 python3.9[202820]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:35 np0005539551 python3.9[202972]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539551 python3.9[203124]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:36 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:37 np0005539551 python3.9[203276]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:38 np0005539551 python3.9[203428]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:38.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:38 np0005539551 python3.9[203701]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:39 np0005539551 python3.9[203865]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:40 np0005539551 python3.9[204017]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:41 np0005539551 python3.9[204095]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:42 np0005539551 python3.9[204247]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:42.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:43 np0005539551 python3.9[204325]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:43 np0005539551 python3.9[204477]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:44 np0005539551 python3.9[204629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:44.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:45 np0005539551 python3.9[204707]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:46 np0005539551 python3.9[204859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:46 np0005539551 python3.9[204937]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:46.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:46.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:47 np0005539551 python3.9[205089]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:47 np0005539551 systemd[1]: Reloading.
Nov 29 02:39:47 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:47 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:49 np0005539551 python3.9[205277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:49 np0005539551 python3.9[205355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539551 python3.9[205557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:50 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:51 np0005539551 python3.9[205635]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:51 np0005539551 python3.9[205787]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:51 np0005539551 systemd[1]: Reloading.
Nov 29 02:39:52 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:52 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:52 np0005539551 systemd[1]: Starting Create netns directory...
Nov 29 02:39:52 np0005539551 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:39:52 np0005539551 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:39:52 np0005539551 systemd[1]: Finished Create netns directory.
Nov 29 02:39:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:52.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:54 np0005539551 python3.9[205980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539551 python3.9[206132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:55 np0005539551 python3.9[206255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401994.3126626-1277-279595948059873/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:56 np0005539551 python3.9[206407]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:56.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:56.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:57 np0005539551 python3.9[206559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:57 np0005539551 python3.9[206682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401996.753926-1352-233264029080861/.source.json _original_basename=.v3990g0p follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:58 np0005539551 python3.9[206834]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:39:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:39:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:58.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:58 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539551 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 02:40:01 np0005539551 podman[207211]: 2025-11-29 07:40:01.668172575 +0000 UTC m=+0.108294802 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:40:01 np0005539551 podman[207209]: 2025-11-29 07:40:01.682357492 +0000 UTC m=+0.122502070 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:40:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:02 np0005539551 python3.9[207305]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 02:40:02 np0005539551 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:40:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:02 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:03 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:40:03 np0005539551 python3.9[207458]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:40:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:06 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:07 np0005539551 python3.9[207610]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:40:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:10 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:40:10 np0005539551 python3[207787]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:40:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:10.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:10 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:10.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:13 np0005539551 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 02:40:13 np0005539551 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:40:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:14 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:15 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:40:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 02:40:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:18 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:40:19.829 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:40:19.830 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:40:19.830 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:40:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:22.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:23 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:40:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:26.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:29 np0005539551 podman[207800]: 2025-11-29 07:40:29.027049564 +0000 UTC m=+18.902358169 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:29 np0005539551 podman[207859]: 2025-11-29 07:40:29.182283475 +0000 UTC m=+0.025549837 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:31 np0005539551 podman[207859]: 2025-11-29 07:40:31.053280052 +0000 UTC m=+1.896546324 container create 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 29 02:40:31 np0005539551 python3[207787]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:31 np0005539551 podman[208022]: 2025-11-29 07:40:31.881932283 +0000 UTC m=+0.087956988 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 02:40:31 np0005539551 podman[208021]: 2025-11-29 07:40:31.944607362 +0000 UTC m=+0.161643937 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:40:32 np0005539551 python3.9[208089]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539551 python3.9[208248]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:33 np0005539551 python3.9[208324]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:34 np0005539551 python3.9[208475]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402033.5717096-1616-134805053559822/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:35 np0005539551 python3.9[208551]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:40:35 np0005539551 systemd[1]: Reloading.
Nov 29 02:40:35 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:35 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:36 np0005539551 python3.9[208664]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:40:36 np0005539551 systemd[1]: Reloading.
Nov 29 02:40:36 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:36 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:36 np0005539551 systemd[1]: Starting multipathd container...
Nov 29 02:40:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:37 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:40:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd072e28ed0c33d5b00183a18c8d20c22c66f316655ad6d2084686bf7953599/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd072e28ed0c33d5b00183a18c8d20c22c66f316655ad6d2084686bf7953599/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:38 np0005539551 systemd[1]: Started /usr/bin/podman healthcheck run 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f.
Nov 29 02:40:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:38.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539551 podman[208703]: 2025-11-29 07:40:40.055603254 +0000 UTC m=+3.286994814 container init 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:40:40 np0005539551 multipathd[208719]: + sudo -E kolla_set_configs
Nov 29 02:40:40 np0005539551 podman[208703]: 2025-11-29 07:40:40.090534946 +0000 UTC m=+3.321926496 container start 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:40:40 np0005539551 multipathd[208719]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:40 np0005539551 multipathd[208719]: INFO:__main__:Validating config file
Nov 29 02:40:40 np0005539551 multipathd[208719]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:40 np0005539551 multipathd[208719]: INFO:__main__:Writing out command to execute
Nov 29 02:40:40 np0005539551 multipathd[208719]: ++ cat /run_command
Nov 29 02:40:40 np0005539551 multipathd[208719]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:40 np0005539551 multipathd[208719]: + ARGS=
Nov 29 02:40:40 np0005539551 multipathd[208719]: + sudo kolla_copy_cacerts
Nov 29 02:40:40 np0005539551 multipathd[208719]: + [[ ! -n '' ]]
Nov 29 02:40:40 np0005539551 multipathd[208719]: + . kolla_extend_start
Nov 29 02:40:40 np0005539551 multipathd[208719]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:40 np0005539551 multipathd[208719]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:40 np0005539551 multipathd[208719]: + umask 0022
Nov 29 02:40:40 np0005539551 multipathd[208719]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:40 np0005539551 multipathd[208719]: 5087.215800 | --------start up--------
Nov 29 02:40:40 np0005539551 multipathd[208719]: 5087.215823 | read /etc/multipath.conf
Nov 29 02:40:40 np0005539551 multipathd[208719]: 5087.225499 | path checkers start up
Nov 29 02:40:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:40.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:41 np0005539551 podman[208703]: multipathd
Nov 29 02:40:41 np0005539551 systemd[1]: Started multipathd container.
Nov 29 02:40:42 np0005539551 podman[208726]: 2025-11-29 07:40:42.121165073 +0000 UTC m=+2.010725635 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:40:42 np0005539551 python3.9[208909]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:42.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:43 np0005539551 python3.9[209063]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:40:44 np0005539551 python3.9[209228]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:40:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:44.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:45 np0005539551 systemd[1]: Stopping multipathd container...
Nov 29 02:40:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:46.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:50 np0005539551 multipathd[208719]: 5097.787615 | exit (signal)
Nov 29 02:40:50 np0005539551 multipathd[208719]: 5097.787678 | --------shut down-------
Nov 29 02:40:50 np0005539551 systemd[1]: libpod-806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f.scope: Deactivated successfully.
Nov 29 02:40:50 np0005539551 podman[209232]: 2025-11-29 07:40:50.877440073 +0000 UTC m=+5.091706987 container died 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:40:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:50.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:51 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:40:52 np0005539551 systemd[1]: 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f-4750695bf25a4c4e.timer: Deactivated successfully.
Nov 29 02:40:52 np0005539551 systemd[1]: Stopped /usr/bin/podman healthcheck run 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f.
Nov 29 02:40:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:40:54 np0005539551 systemd[1]: var-lib-containers-storage-overlay-afd072e28ed0c33d5b00183a18c8d20c22c66f316655ad6d2084686bf7953599-merged.mount: Deactivated successfully.
Nov 29 02:40:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:56 np0005539551 podman[209232]: 2025-11-29 07:40:56.299235022 +0000 UTC m=+10.513501936 container cleanup 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:40:56 np0005539551 podman[209232]: multipathd
Nov 29 02:40:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:56 np0005539551 podman[209392]: multipathd
Nov 29 02:40:56 np0005539551 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 02:40:56 np0005539551 systemd[1]: Stopped multipathd container.
Nov 29 02:40:56 np0005539551 systemd[1]: Starting multipathd container...
Nov 29 02:40:56 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:40:56 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd072e28ed0c33d5b00183a18c8d20c22c66f316655ad6d2084686bf7953599/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd072e28ed0c33d5b00183a18c8d20c22c66f316655ad6d2084686bf7953599/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:56.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:57 np0005539551 systemd[1]: Started /usr/bin/podman healthcheck run 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f.
Nov 29 02:40:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:40:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:57 np0005539551 podman[209405]: 2025-11-29 07:40:57.325601203 +0000 UTC m=+0.931359462 container init 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:40:57 np0005539551 multipathd[209421]: + sudo -E kolla_set_configs
Nov 29 02:40:57 np0005539551 podman[209405]: 2025-11-29 07:40:57.376263863 +0000 UTC m=+0.982022112 container start 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:40:57 np0005539551 multipathd[209421]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:57 np0005539551 multipathd[209421]: INFO:__main__:Validating config file
Nov 29 02:40:57 np0005539551 multipathd[209421]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:57 np0005539551 multipathd[209421]: INFO:__main__:Writing out command to execute
Nov 29 02:40:57 np0005539551 multipathd[209421]: ++ cat /run_command
Nov 29 02:40:57 np0005539551 multipathd[209421]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:57 np0005539551 multipathd[209421]: + ARGS=
Nov 29 02:40:57 np0005539551 multipathd[209421]: + sudo kolla_copy_cacerts
Nov 29 02:40:57 np0005539551 multipathd[209421]: + [[ ! -n '' ]]
Nov 29 02:40:57 np0005539551 multipathd[209421]: + . kolla_extend_start
Nov 29 02:40:57 np0005539551 multipathd[209421]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:57 np0005539551 multipathd[209421]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:57 np0005539551 multipathd[209421]: + umask 0022
Nov 29 02:40:57 np0005539551 multipathd[209421]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:57 np0005539551 multipathd[209421]: 5104.401939 | --------start up--------
Nov 29 02:40:57 np0005539551 multipathd[209421]: 5104.401959 | read /etc/multipath.conf
Nov 29 02:40:57 np0005539551 multipathd[209421]: 5104.409442 | path checkers start up
Nov 29 02:40:57 np0005539551 podman[209405]: multipathd
Nov 29 02:40:57 np0005539551 systemd[1]: Started multipathd container.
Nov 29 02:40:57 np0005539551 podman[209428]: 2025-11-29 07:40:57.542522363 +0000 UTC m=+0.153938626 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 29 02:40:58 np0005539551 python3.9[209614]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:40:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:59 np0005539551 python3.9[209766]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:41:00 np0005539551 python3.9[209918]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 02:41:00 np0005539551 kernel: Key type psk registered
Nov 29 02:41:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:00.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:01 np0005539551 python3.9[210081]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:41:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:01 np0005539551 python3.9[210204]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402060.6766772-1856-73049512451402/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:02 np0005539551 podman[210353]: 2025-11-29 07:41:02.420185105 +0000 UTC m=+0.054039953 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:41:02 np0005539551 podman[210351]: 2025-11-29 07:41:02.448279301 +0000 UTC m=+0.082139369 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:41:02 np0005539551 python3.9[210441]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:02.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:02.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539551 python3.9[210602]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:03 np0005539551 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:41:03 np0005539551 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:41:03 np0005539551 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:41:03 np0005539551 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:41:03 np0005539551 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:41:04 np0005539551 python3.9[210758]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:41:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:04.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:07 np0005539551 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:07 np0005539551 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:08 np0005539551 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 02:41:08 np0005539551 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 02:41:08 np0005539551 lvm[210873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:41:08 np0005539551 lvm[210873]: VG ceph_vg0 finished
Nov 29 02:41:08 np0005539551 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:41:08 np0005539551 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:41:08 np0005539551 systemd[1]: Reloading.
Nov 29 02:41:08 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:08 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:08.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:08.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:09 np0005539551 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:41:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:12.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:14 np0005539551 python3.9[212217]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:14 np0005539551 systemd[1]: Stopping Open-iSCSI...
Nov 29 02:41:14 np0005539551 iscsid[200093]: iscsid shutting down.
Nov 29 02:41:14 np0005539551 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 02:41:14 np0005539551 systemd[1]: Stopped Open-iSCSI.
Nov 29 02:41:14 np0005539551 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:41:14 np0005539551 systemd[1]: Starting Open-iSCSI...
Nov 29 02:41:14 np0005539551 systemd[1]: Started Open-iSCSI.
Nov 29 02:41:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:15 np0005539551 python3.9[212371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:41:16 np0005539551 python3.9[212527]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539551 python3.9[212679]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:18 np0005539551 systemd[1]: Reloading.
Nov 29 02:41:18 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:18 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:18.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:19 np0005539551 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:41:19 np0005539551 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:41:19 np0005539551 systemd[1]: man-db-cache-update.service: Consumed 1.865s CPU time.
Nov 29 02:41:19 np0005539551 systemd[1]: run-r4b6019ef2f364965b2cb54c1ab463227.service: Deactivated successfully.
Nov 29 02:41:19 np0005539551 python3.9[212865]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:41:19 np0005539551 network[212882]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:41:19 np0005539551 network[212883]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:41:19 np0005539551 network[212884]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:41:19.831 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:41:19.832 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:41:19.832 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:20.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:24.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:24.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:25 np0005539551 python3.9[213159]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:26 np0005539551 python3.9[213312]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539551 python3.9[213465]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:28 np0005539551 podman[213467]: 2025-11-29 07:41:28.454873741 +0000 UTC m=+0.086843907 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:41:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:28.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:29 np0005539551 python3.9[213637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:30 np0005539551 python3.9[213790]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:30 np0005539551 python3.9[213943]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:30.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:31 np0005539551 python3.9[214096]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:32 np0005539551 python3.9[214249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:32 np0005539551 podman[214252]: 2025-11-29 07:41:32.563061881 +0000 UTC m=+0.054121165 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:41:32 np0005539551 podman[214251]: 2025-11-29 07:41:32.574252376 +0000 UTC m=+0.074590953 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:41:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:33 np0005539551 python3.9[214446]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539551 python3.9[214598]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:34 np0005539551 python3.9[214750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:35 np0005539551 python3.9[214902]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539551 python3.9[215054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539551 python3.9[215206]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:37 np0005539551 python3.9[215358]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:37 np0005539551 python3.9[215510]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539551 python3.9[215662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:39 np0005539551 python3.9[215814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539551 python3.9[215966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:40 np0005539551 python3.9[216118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:40.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:41 np0005539551 python3.9[216270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539551 python3.9[216422]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539551 python3.9[216574]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:43 np0005539551 python3.9[216726]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:44 np0005539551 python3.9[216878]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:45 np0005539551 python3.9[217030]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:41:46 np0005539551 python3.9[217182]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:46 np0005539551 systemd[1]: Reloading.
Nov 29 02:41:46 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:46 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539551 python3.9[217369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:49 np0005539551 python3.9[217522]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:50 np0005539551 python3.9[217675]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:51 np0005539551 python3.9[217828]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:51 np0005539551 python3.9[217981]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539551 python3.9[218134]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539551 python3.9[218287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:53 np0005539551 python3.9[218440]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:55.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:55 np0005539551 python3.9[218593]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539551 python3.9[218745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:56 np0005539551 python3.9[218897]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:57.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:57 np0005539551 python3.9[219049]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:58 np0005539551 python3.9[219201]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:58 np0005539551 podman[219226]: 2025-11-29 07:41:58.634715024 +0000 UTC m=+0.088190414 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:41:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:41:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:59 np0005539551 python3.9[219374]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:00 np0005539551 python3.9[219526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:01 np0005539551 python3.9[219678]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:02 np0005539551 python3.9[219830]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:02 np0005539551 podman[220007]: 2025-11-29 07:42:02.740361968 +0000 UTC m=+0.062511465 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 02:42:02 np0005539551 podman[220005]: 2025-11-29 07:42:02.782812515 +0000 UTC m=+0.107990505 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:42:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:03 np0005539551 python3.9[220009]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:05.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:07.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:09 np0005539551 python3.9[220311]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 02:42:10 np0005539551 python3.9[220464]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:42:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539551 python3.9[220672]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:42:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:15.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:17.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:17.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:42:19.832 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:42:19.833 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:42:19.833 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:21.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:21 np0005539551 systemd-logind[788]: New session 50 of user zuul.
Nov 29 02:42:21 np0005539551 systemd[1]: Started Session 50 of User zuul.
Nov 29 02:42:22 np0005539551 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 02:42:22 np0005539551 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Nov 29 02:42:22 np0005539551 systemd-logind[788]: Removed session 50.
Nov 29 02:42:22 np0005539551 python3.9[220858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:23.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:23.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:23 np0005539551 python3.9[220979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402142.2514613-3439-182950122766834/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:24 np0005539551 python3.9[221129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:24 np0005539551 python3.9[221205]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:25 np0005539551 python3.9[221355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:25 np0005539551 python3.9[221476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402144.6020052-3439-230767167624975/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:26 np0005539551 python3.9[221626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:27 np0005539551 python3.9[221747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402145.8105557-3439-41134401980925/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:28 np0005539551 python3.9[221897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:29 np0005539551 python3.9[222018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402147.4205227-3439-128893701569181/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:29 np0005539551 podman[222019]: 2025-11-29 07:42:29.459543335 +0000 UTC m=+0.069262629 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:42:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:29 np0005539551 python3.9[222188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:30 np0005539551 python3.9[222309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402149.5149064-3439-152249800277392/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:31.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:31 np0005539551 python3.9[222461]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539551 python3.9[222613]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539551 python3.9[222765]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:33.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:33 np0005539551 podman[222890]: 2025-11-29 07:42:33.394100745 +0000 UTC m=+0.075830807 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:42:33 np0005539551 podman[222889]: 2025-11-29 07:42:33.421134402 +0000 UTC m=+0.103572323 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:33 np0005539551 python3.9[222952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:34 np0005539551 python3.9[223082]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764402153.0810316-3761-65928896170488/.source _original_basename=.iqxwa72x follow=False checksum=303d1db00d3d38011b89aa2ef37c9cdb715305e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 02:42:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:35 np0005539551 python3.9[223234]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:36 np0005539551 python3.9[223386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:36 np0005539551 python3.9[223507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402155.7279022-3839-63585203048920/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:37.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:37.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:37 np0005539551 python3.9[223657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:38 np0005539551 python3.9[223778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402157.0447454-3883-31247623223441/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:39 np0005539551 python3.9[223930]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 02:42:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:39.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:39 np0005539551 python3.9[224082]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:42:41 np0005539551 python3[224234]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:42:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:43.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:47.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:49.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:50 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:42:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:53.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:54 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:42:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:55.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 1005..1707) lease_timeout -- calling new election
Nov 29 02:42:55 np0005539551 ceph-mon[81672]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 29 02:42:55 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(64) init, last seen epoch 64
Nov 29 02:42:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:57.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:42:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:42:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:59.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:01.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:02 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:43:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:03.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:05.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:43:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 handle_timecheck drop unexpected msg
Nov 29 02:43:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 _ms_dispatch dropping stray message mgrbeacon mgr.compute-1.fchyan(b66774a7-56d9-5535-bd8c-681234404870,24104, , 0) v11 from mgr.24104 192.168.122.101:0/1646141000
Nov 29 02:43:08 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:08 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:08 np0005539551 ceph-mgr[82034]: ms_deliver_dispatch: unhandled message 0x5625f2225600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:43:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:09.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:10 np0005539551 ceph-mds[84617]: mds.beacon.cephfs.compute-1.ldsugj missed beacon ack from the monitors
Nov 29 02:43:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:11.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:13.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:13 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:13 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:13 np0005539551 ceph-mgr[82034]: ms_deliver_dispatch: unhandled message 0x5625f2225080 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:43:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:15.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:15 np0005539551 podman[224320]: 2025-11-29 07:43:15.14448207 +0000 UTC m=+11.587325181 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:43:15 np0005539551 podman[224306]: 2025-11-29 07:43:15.146443053 +0000 UTC m=+15.556831483 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:43:15 np0005539551 podman[224319]: 2025-11-29 07:43:15.180054038 +0000 UTC m=+11.628224024 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:15.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:16 np0005539551 podman[224544]: 2025-11-29 07:43:16.984395247 +0000 UTC m=+1.607318166 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:17 np0005539551 podman[224544]: 2025-11-29 07:43:17.190891749 +0000 UTC m=+1.813814678 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:43:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:43:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:17.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:43:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 get_health_metrics reporting 1 slow ops, oldest is mdsbeacon(24146/cephfs.compute-1.ldsugj up:standby fs=cephfs seq=393 v11)
Nov 29 02:43:18 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:18 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:43:18.872+0000 7fe9991b7640 -1 mon.compute-1@2(electing) e3 get_health_metrics reporting 1 slow ops, oldest is mdsbeacon(24146/cephfs.compute-1.ldsugj up:standby fs=cephfs seq=393 v11)
Nov 29 02:43:18 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.101:3300/0
Nov 29 02:43:18 np0005539551 ceph-mgr[82034]: ms_deliver_dispatch: unhandled message 0x5625f2225080 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Nov 29 02:43:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:19.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:19.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:43:19.834 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:43:19.834 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:43:19.834 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:19 np0005539551 ceph-mon[81672]: paxos.2).electionLogic(67) init, last seen epoch 67, mid-election, bumping
Nov 29 02:43:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:23 np0005539551 podman[224246]: 2025-11-29 07:43:23.472546057 +0000 UTC m=+42.335085158 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:23 np0005539551 podman[224631]: 2025-11-29 07:43:23.602058233 +0000 UTC m=+0.026419881 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: MDS daemon mds.cephfs.compute-1.ldsugj is removed because it is dead or otherwise unavailable.
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: MDS daemon mds.cephfs.compute-2.mmoati is removed because it is dead or otherwise unavailable.
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:23 np0005539551 ceph-mon[81672]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:23 np0005539551 ceph-mon[81672]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 8 at 2025-11-29T07:42:55.392160+0000)
Nov 29 02:43:27 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:43:27.055+0000 7fe9991b7640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 8 at 2025-11-29T07:42:55.392160+0000)
Nov 29 02:43:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:27.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:27 np0005539551 podman[224631]: 2025-11-29 07:43:27.483346643 +0000 UTC m=+3.907708201 container create d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:27 np0005539551 python3[224234]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 02:43:28 np0005539551 python3.9[224877]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:31.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:31 np0005539551 python3.9[225031]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 02:43:32 np0005539551 python3.9[225183]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:43:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:33.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:34 np0005539551 python3[225335]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:43:34 np0005539551 podman[225372]: 2025-11-29 07:43:34.437018184 +0000 UTC m=+0.024333163 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:35.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:35 np0005539551 ceph-mon[81672]: mon.compute-2 calling monitor election
Nov 29 02:43:35 np0005539551 ceph-mon[81672]: mon.compute-0 calling monitor election
Nov 29 02:43:35 np0005539551 ceph-mon[81672]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:43:35 np0005539551 ceph-mon[81672]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:43:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 1005..1723) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.952327609s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:43:36 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:43:36.702+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 1005..1723) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.952327609s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:43:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 8 at 2025-11-29T07:42:55.392160+0000)
Nov 29 02:43:37 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:43:37.130+0000 7fe9991b7640 -1 mon.compute-1@2(peon) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 8 at 2025-11-29T07:42:55.392160+0000)
Nov 29 02:43:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:38 np0005539551 podman[225372]: 2025-11-29 07:43:38.658048363 +0000 UTC m=+4.245363332 container create f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:43:38 np0005539551 python3[225335]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 02:43:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:39 np0005539551 python3.9[225562]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:39 np0005539551 ceph-mon[81672]: mon.compute-1 calling monitor election
Nov 29 02:43:39 np0005539551 ceph-mon[81672]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:39 np0005539551 ceph-mon[81672]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:40 np0005539551 python3.9[225716]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:41.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:41 np0005539551 python3.9[225967]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402220.5925684-4159-61965693509260/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:41.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:41 np0005539551 python3.9[226119]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:43:41 np0005539551 systemd[1]: Reloading.
Nov 29 02:43:41 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:43:41 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:43:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:42 np0005539551 python3.9[226301]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:43:42 np0005539551 systemd[1]: Reloading.
Nov 29 02:43:42 np0005539551 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:43:42 np0005539551 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:43:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:43.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:43 np0005539551 systemd[1]: Starting dnf makecache...
Nov 29 02:43:43 np0005539551 systemd[1]: Starting nova_compute container...
Nov 29 02:43:43 np0005539551 dnf[226339]: Metadata cache refreshed recently.
Nov 29 02:43:43 np0005539551 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 02:43:43 np0005539551 systemd[1]: Finished dnf makecache.
Nov 29 02:43:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:45.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:45 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:43:45 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:46 np0005539551 podman[226341]: 2025-11-29 07:43:46.388858821 +0000 UTC m=+3.019711848 container init f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:46 np0005539551 podman[226341]: 2025-11-29 07:43:46.401573576 +0000 UTC m=+3.032426523 container start f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + sudo -E kolla_set_configs
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Validating config file
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying service configuration files
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Writing out command to execute
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:46 np0005539551 nova_compute[226360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:46 np0005539551 nova_compute[226360]: ++ cat /run_command
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + CMD=nova-compute
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + ARGS=
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + sudo kolla_copy_cacerts
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + [[ ! -n '' ]]
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + . kolla_extend_start
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:46 np0005539551 nova_compute[226360]: Running command: 'nova-compute'
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + umask 0022
Nov 29 02:43:46 np0005539551 nova_compute[226360]: + exec nova-compute
Nov 29 02:43:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:47.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539551 podman[226341]: nova_compute
Nov 29 02:43:47 np0005539551 podman[226358]: 2025-11-29 07:43:47.672964427 +0000 UTC m=+2.115676826 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:43:47 np0005539551 systemd[1]: Started nova_compute container.
Nov 29 02:43:47 np0005539551 podman[226355]: 2025-11-29 07:43:47.732450697 +0000 UTC m=+2.174932770 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:47 np0005539551 podman[226356]: 2025-11-29 07:43:47.773751751 +0000 UTC m=+2.213089998 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 02:43:47 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops)
Nov 29 02:43:47 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:43:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:48 np0005539551 nova_compute[226360]: 2025-11-29 07:43:48.692 226410 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:48 np0005539551 nova_compute[226360]: 2025-11-29 07:43:48.692 226410 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:48 np0005539551 nova_compute[226360]: 2025-11-29 07:43:48.692 226410 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:48 np0005539551 nova_compute[226360]: 2025-11-29 07:43:48.692 226410 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:43:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:49 np0005539551 nova_compute[226360]: 2025-11-29 07:43:49.302 226410 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:49.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:49 np0005539551 nova_compute[226360]: 2025-11-29 07:43:49.317 226410 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:49 np0005539551 nova_compute[226360]: 2025-11-29 07:43:49.318 226410 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:43:49 np0005539551 python3.9[226582]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:49 np0005539551 nova_compute[226360]: 2025-11-29 07:43:49.947 226410 INFO nova.virt.driver [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.512 226410 INFO nova.compute.provider_config [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:43:50 np0005539551 python3.9[226734]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.539 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.539 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.539 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.540 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.540 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.540 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.540 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.540 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.541 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.541 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.541 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.542 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.542 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.542 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.542 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.542 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.543 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.543 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.543 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.543 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.544 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.544 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.544 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.544 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.544 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.545 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.545 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.545 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.545 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.545 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.546 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.546 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.546 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.546 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.546 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.547 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.547 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.547 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.547 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.547 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.548 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.548 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.548 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.548 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.549 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.549 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.549 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.549 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.549 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.550 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.550 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.550 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.550 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.550 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.551 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.551 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.551 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.551 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.551 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.552 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.553 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.553 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.553 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.553 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.553 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.554 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.555 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.556 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.557 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.558 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.559 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.560 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.560 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.560 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.560 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.560 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.561 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.562 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.562 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.562 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.562 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.562 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.563 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.563 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.563 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.563 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.564 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.564 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.564 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.564 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.564 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.565 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.566 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.567 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.568 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.569 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.570 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.571 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.571 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.571 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.571 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.571 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.572 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.573 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.574 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.575 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.576 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.576 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.576 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.576 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.576 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.577 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.578 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.579 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.580 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.580 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.580 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.580 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.580 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.581 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.582 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.582 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.582 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.582 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.582 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.583 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.584 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.585 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.586 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.587 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.587 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.587 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.587 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.587 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.588 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.589 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.589 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.589 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.589 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.589 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.590 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.591 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.592 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.592 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.592 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.592 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.592 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.593 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.593 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.593 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.593 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.593 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.594 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.595 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.596 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.597 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.598 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.599 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.599 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.599 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.599 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.600 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.600 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.600 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.600 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.600 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.601 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.601 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.601 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.601 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.602 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.602 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.602 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.602 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.602 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.603 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.603 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.603 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.603 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.604 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.604 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.604 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.604 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.604 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.605 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.605 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.605 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.605 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.605 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.606 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.607 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.608 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.609 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.610 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.611 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.612 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.613 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.614 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.615 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.616 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.617 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.618 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.619 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.620 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.621 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.622 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.623 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.623 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.623 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.623 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.624 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.624 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.624 226410 WARNING oslo_config.cfg [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:43:50 np0005539551 nova_compute[226360]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:43:50 np0005539551 nova_compute[226360]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:43:50 np0005539551 nova_compute[226360]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:43:50 np0005539551 nova_compute[226360]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.624 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.625 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.626 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.627 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.628 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.629 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.630 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.631 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.632 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.633 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.634 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.635 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.636 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.637 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.638 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.639 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.640 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.641 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.642 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.643 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.644 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.645 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.646 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.647 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.648 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.649 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.650 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.651 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.652 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.653 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.654 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.655 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.656 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.657 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.658 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.659 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.660 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.661 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.662 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.663 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.664 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.665 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.666 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.667 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.668 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.669 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.670 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.671 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.672 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.673 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.674 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.675 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.676 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.677 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.678 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.679 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.680 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.681 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.681 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.681 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.681 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.681 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.682 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.683 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.684 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.684 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.684 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.684 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.685 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.685 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.685 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.685 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.686 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.687 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.687 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.687 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.687 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.687 226410 DEBUG oslo_service.service [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.689 226410 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.844 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.844 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.845 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.845 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 02:43:50 np0005539551 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:43:50 np0005539551 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.942 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f381b405df0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.946 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f381b405df0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 02:43:50 np0005539551 nova_compute[226360]: 2025-11-29 07:43:50.948 226410 INFO nova.virt.libvirt.driver [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Connection event '1' reason 'None'
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 2025-11-29 07:43:51.145 226410 WARNING nova.virt.libvirt.driver [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 2025-11-29 07:43:51.146 226410 DEBUG nova.virt.libvirt.volume.mount [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 02:43:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:51.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:51 np0005539551 python3.9[226936]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 2025-11-29 07:43:51.968 226410 INFO nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  <host>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <uuid>2d58586e-4ce1-425d-890c-d5cdff75e822</uuid>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <cpu>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <arch>x86_64</arch>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <model>EPYC-Rome-v4</model>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <vendor>AMD</vendor>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <microcode version='16777317'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='x2apic'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='tsc-deadline'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='osxsave'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='hypervisor'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='tsc_adjust'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='spec-ctrl'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='stibp'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='arch-capabilities'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='ssbd'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='cmp_legacy'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='topoext'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='virt-ssbd'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='lbrv'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='tsc-scale'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='vmcb-clean'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='pause-filter'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='pfthreshold'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='svme-addr-chk'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='rdctl-no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='mds-no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <feature name='pschange-mc-no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <pages unit='KiB' size='4'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <pages unit='KiB' size='2048'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </cpu>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <power_management>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <suspend_mem/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </power_management>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <iommu support='no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <migration_features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <live/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <uri_transports>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:        <uri_transport>tcp</uri_transport>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:        <uri_transport>rdma</uri_transport>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      </uri_transports>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </migration_features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <topology>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <cells num='1'>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:        <cell id='0'>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <memory unit='KiB'>7864316</memory>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <distances>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <sibling id='0' value='10'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          </distances>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          <cpus num='8'>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:          </cpus>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:        </cell>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      </cells>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </topology>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <cache>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </cache>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <secmodel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <model>selinux</model>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <doi>0</doi>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </secmodel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <secmodel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <model>dac</model>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <doi>0</doi>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </secmodel>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  </host>
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  <guest>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <os_type>hvm</os_type>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <arch name='i686'>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <wordsize>32</wordsize>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <domain type='qemu'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <domain type='kvm'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </arch>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <pae/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <nonpae/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <apic default='on' toggle='no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <cpuselection/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <deviceboot/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <externalSnapshot/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  </guest>
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  <guest>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <os_type>hvm</os_type>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <arch name='x86_64'>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <wordsize>64</wordsize>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <domain type='qemu'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <domain type='kvm'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </arch>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    <features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <apic default='on' toggle='no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <cpuselection/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <deviceboot/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:      <externalSnapshot/>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:    </features>
Nov 29 02:43:51 np0005539551 nova_compute[226360]:  </guest>
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 
Nov 29 02:43:51 np0005539551 nova_compute[226360]: </capabilities>
Nov 29 02:43:51 np0005539551 nova_compute[226360]: #033[00m
Nov 29 02:43:51 np0005539551 nova_compute[226360]: 2025-11-29 07:43:51.975 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.001 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:43:52 np0005539551 nova_compute[226360]: <domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <domain>kvm</domain>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <arch>i686</arch>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <vcpu max='4096'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <iothreads supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <os supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='firmware'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <loader supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>rom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pflash</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='readonly'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>yes</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='secure'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </loader>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </os>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='maximumMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <vendor>AMD</vendor>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='succor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='custom' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-128'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-256'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-512'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <memoryBacking supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='sourceType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>anonymous</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>memfd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </memoryBacking>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <disk supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='diskDevice'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>disk</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cdrom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>floppy</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>lun</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>fdc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>sata</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </disk>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <graphics supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vnc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egl-headless</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </graphics>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <video supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='modelType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vga</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cirrus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>none</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>bochs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ramfb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </video>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hostdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='mode'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>subsystem</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='startupPolicy'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>mandatory</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>requisite</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>optional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='subsysType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pci</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='capsType'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='pciBackend'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hostdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <rng supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>random</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </rng>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <filesystem supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='driverType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>path</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>handle</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtiofs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </filesystem>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <tpm supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-tis</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-crb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emulator</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>external</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendVersion'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>2.0</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </tpm>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <redirdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </redirdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <channel supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </channel>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <crypto supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </crypto>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <interface supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>passt</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </interface>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <panic supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>isa</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>hyperv</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </panic>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <console supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>null</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dev</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pipe</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stdio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>udp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tcp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu-vdagent</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </console>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <gic supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <genid supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backup supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <async-teardown supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <ps2 supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sev supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sgx supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hyperv supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='features'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>relaxed</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vapic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>spinlocks</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vpindex</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>runtime</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>synic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stimer</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reset</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vendor_id</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>frequencies</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reenlightenment</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tlbflush</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ipi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>avic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emsr_bitmap</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>xmm_input</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hyperv>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <launchSecurity supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='sectype'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tdx</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </launchSecurity>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: </domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.008 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:43:52 np0005539551 nova_compute[226360]: <domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <domain>kvm</domain>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <arch>i686</arch>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <vcpu max='240'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <iothreads supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <os supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='firmware'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <loader supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>rom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pflash</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='readonly'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>yes</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='secure'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </loader>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </os>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='maximumMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <vendor>AMD</vendor>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='succor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='custom' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-128'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-256'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-512'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <memoryBacking supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='sourceType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>anonymous</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>memfd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </memoryBacking>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <disk supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='diskDevice'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>disk</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cdrom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>floppy</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>lun</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ide</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>fdc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>sata</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </disk>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <graphics supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vnc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egl-headless</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </graphics>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <video supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='modelType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vga</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cirrus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>none</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>bochs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ramfb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </video>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hostdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='mode'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>subsystem</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='startupPolicy'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>mandatory</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>requisite</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>optional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='subsysType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pci</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='capsType'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='pciBackend'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hostdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <rng supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>random</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </rng>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <filesystem supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='driverType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>path</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>handle</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtiofs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </filesystem>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <tpm supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-tis</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-crb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emulator</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>external</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendVersion'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>2.0</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </tpm>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <redirdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </redirdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <channel supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </channel>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <crypto supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </crypto>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <interface supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>passt</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </interface>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <panic supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>isa</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>hyperv</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </panic>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <console supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>null</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dev</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pipe</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stdio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>udp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tcp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu-vdagent</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </console>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <gic supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <genid supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backup supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <async-teardown supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <ps2 supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sev supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sgx supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hyperv supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='features'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>relaxed</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vapic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>spinlocks</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vpindex</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>runtime</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>synic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stimer</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reset</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vendor_id</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>frequencies</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reenlightenment</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tlbflush</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ipi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>avic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emsr_bitmap</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>xmm_input</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hyperv>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <launchSecurity supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='sectype'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tdx</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </launchSecurity>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: </domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.057 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.062 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:43:52 np0005539551 nova_compute[226360]: <domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <domain>kvm</domain>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <arch>x86_64</arch>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <vcpu max='4096'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <iothreads supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <os supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='firmware'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>efi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <loader supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>rom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pflash</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='readonly'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>yes</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='secure'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>yes</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </loader>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </os>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='maximumMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <vendor>AMD</vendor>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='succor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='custom' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-128'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-256'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-512'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <memoryBacking supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='sourceType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>anonymous</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>memfd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </memoryBacking>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <disk supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='diskDevice'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>disk</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cdrom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>floppy</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>lun</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>fdc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>sata</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </disk>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <graphics supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vnc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egl-headless</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </graphics>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <video supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='modelType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vga</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cirrus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>none</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>bochs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ramfb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </video>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hostdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='mode'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>subsystem</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='startupPolicy'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>mandatory</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>requisite</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>optional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='subsysType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pci</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='capsType'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='pciBackend'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hostdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <rng supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>random</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </rng>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <filesystem supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='driverType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>path</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>handle</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtiofs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </filesystem>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <tpm supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-tis</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-crb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emulator</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>external</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendVersion'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>2.0</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </tpm>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <redirdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </redirdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <channel supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </channel>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <crypto supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </crypto>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <interface supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>passt</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </interface>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <panic supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>isa</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>hyperv</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </panic>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <console supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>null</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dev</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pipe</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stdio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>udp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tcp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu-vdagent</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </console>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <gic supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <genid supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backup supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <async-teardown supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <ps2 supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sev supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sgx supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hyperv supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='features'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>relaxed</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vapic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>spinlocks</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vpindex</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>runtime</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>synic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stimer</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reset</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vendor_id</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>frequencies</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reenlightenment</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tlbflush</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ipi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>avic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emsr_bitmap</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>xmm_input</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hyperv>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <launchSecurity supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='sectype'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tdx</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </launchSecurity>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: </domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.158 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:43:52 np0005539551 nova_compute[226360]: <domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <domain>kvm</domain>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <arch>x86_64</arch>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <vcpu max='240'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <iothreads supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <os supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='firmware'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <loader supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>rom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pflash</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='readonly'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>yes</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='secure'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>no</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </loader>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </os>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='maximumMigratable'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>on</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>off</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <vendor>AMD</vendor>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='succor'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <mode name='custom' supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Denverton-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='auto-ibrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amd-psfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='stibp-always-on'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='EPYC-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-128'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-256'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx10-512'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='prefetchiti'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Haswell-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512er'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512pf'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fma4'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tbm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xop'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='amx-tile'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-bf16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-fp16'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bitalg'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrc'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fzrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='la57'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='taa-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xfd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ifma'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cmpccxadd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fbsdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='fsrs'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ibrs-all'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mcdt-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pbrsb-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='psdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='serialize'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vaes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='hle'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='rtm'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512bw'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512cd'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512dq'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512f'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='avx512vl'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='invpcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pcid'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='pku'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='mpx'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='core-capability'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='split-lock-detect'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='cldemote'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='erms'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='gfni'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdir64b'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='movdiri'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='xsaves'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='athlon-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='core2duo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='coreduo-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='n270-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='ss'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <blockers model='phenom-v1'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnow'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <feature name='3dnowext'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </blockers>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </mode>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <memoryBacking supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <enum name='sourceType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>anonymous</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <value>memfd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </memoryBacking>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <disk supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='diskDevice'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>disk</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cdrom</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>floppy</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>lun</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ide</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>fdc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>sata</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </disk>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <graphics supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vnc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egl-headless</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </graphics>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <video supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='modelType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vga</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>cirrus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>none</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>bochs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ramfb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </video>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hostdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='mode'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>subsystem</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='startupPolicy'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>mandatory</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>requisite</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>optional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='subsysType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pci</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>scsi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='capsType'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='pciBackend'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hostdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <rng supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtio-non-transitional</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>random</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>egd</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </rng>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <filesystem supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='driverType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>path</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>handle</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>virtiofs</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </filesystem>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <tpm supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-tis</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tpm-crb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emulator</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>external</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendVersion'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>2.0</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </tpm>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <redirdev supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='bus'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>usb</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </redirdev>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <channel supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </channel>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <crypto supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendModel'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>builtin</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </crypto>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <interface supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='backendType'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>default</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>passt</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </interface>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <panic supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='model'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>isa</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>hyperv</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </panic>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <console supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='type'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>null</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vc</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pty</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dev</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>file</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>pipe</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stdio</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>udp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tcp</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>unix</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>qemu-vdagent</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>dbus</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </console>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </devices>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <gic supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <genid supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <backup supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <async-teardown supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <ps2 supported='yes'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sev supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <sgx supported='no'/>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <hyperv supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='features'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>relaxed</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vapic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>spinlocks</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vpindex</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>runtime</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>synic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>stimer</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reset</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>vendor_id</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>frequencies</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>reenlightenment</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tlbflush</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>ipi</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>avic</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>emsr_bitmap</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>xmm_input</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </defaults>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </hyperv>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    <launchSecurity supported='yes'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      <enum name='sectype'>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:        <value>tdx</value>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:      </enum>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:    </launchSecurity>
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  </features>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: </domainCapabilities>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.261 226410 DEBUG nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.262 226410 INFO nova.virt.libvirt.host [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.264 226410 INFO nova.virt.libvirt.driver [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.274 226410 DEBUG nova.virt.libvirt.driver [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:43:52 np0005539551 nova_compute[226360]:  <model>Nehalem</model>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: </cpu>
Nov 29 02:43:52 np0005539551 nova_compute[226360]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.276 226410 DEBUG nova.virt.libvirt.driver [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.587 226410 INFO nova.virt.node [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Determined node identity 67c71d68-0dd7-4589-b775-189b4191a844 from /var/lib/nova/compute_id#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.609 226410 WARNING nova.compute.manager [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Compute nodes ['67c71d68-0dd7-4589-b775-189b4191a844'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 02:43:52 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:43:52 np0005539551 python3.9[227100]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.713 226410 INFO nova.compute.manager [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.758 226410 WARNING nova.compute.manager [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.759 226410 DEBUG oslo_concurrency.lockutils [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.759 226410 DEBUG oslo_concurrency.lockutils [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.759 226410 DEBUG oslo_concurrency.lockutils [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.760 226410 DEBUG nova.compute.resource_tracker [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:43:52 np0005539551 nova_compute[226360]: 2025-11-29 07:43:52.761 226410 DEBUG oslo_concurrency.processutils [None req-25393511-9f69-4001-a90c-fe70aed769fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:53.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:53 np0005539551 python3.9[227296]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:43:53 np0005539551 systemd[1]: Stopping nova_compute container...
Nov 29 02:43:53 np0005539551 ceph-mon[81672]: Health check failed: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:43:54 np0005539551 nova_compute[226360]: 2025-11-29 07:43:54.272 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:54 np0005539551 nova_compute[226360]: 2025-11-29 07:43:54.273 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:54 np0005539551 nova_compute[226360]: 2025-11-29 07:43:54.273 226410 DEBUG oslo_concurrency.lockutils [None req-fc828f18-5a3e-4005-a373-2df5f7c2fc68 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:54 np0005539551 virtqemud[226785]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 02:43:54 np0005539551 virtqemud[226785]: hostname: compute-1
Nov 29 02:43:54 np0005539551 virtqemud[226785]: End of file while reading data: Input/output error
Nov 29 02:43:54 np0005539551 podman[227301]: 2025-11-29 07:43:54.824902404 +0000 UTC m=+1.002117342 container died f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:54 np0005539551 systemd[1]: libpod-f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3.scope: Deactivated successfully.
Nov 29 02:43:54 np0005539551 systemd[1]: libpod-f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3.scope: Consumed 3.792s CPU time.
Nov 29 02:43:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:55.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:55.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:56 np0005539551 ceph-mon[81672]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops)
Nov 29 02:43:56 np0005539551 ceph-mon[81672]: Cluster is now healthy
Nov 29 02:43:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:43:56 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3-userdata-shm.mount: Deactivated successfully.
Nov 29 02:43:56 np0005539551 systemd[1]: var-lib-containers-storage-overlay-99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee-merged.mount: Deactivated successfully.
Nov 29 02:43:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:57.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:57 np0005539551 podman[227301]: 2025-11-29 07:43:57.779351223 +0000 UTC m=+3.956566191 container cleanup f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=nova_compute)
Nov 29 02:43:57 np0005539551 podman[227301]: nova_compute
Nov 29 02:43:57 np0005539551 podman[227332]: nova_compute
Nov 29 02:43:57 np0005539551 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 02:43:57 np0005539551 systemd[1]: Stopped nova_compute container.
Nov 29 02:43:57 np0005539551 systemd[1]: Starting nova_compute container...
Nov 29 02:43:58 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:43:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c24d75fc05e2419f2b854d67d86b80db23a1e112d47b5791a2ba4937b059ee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:58 np0005539551 podman[227345]: 2025-11-29 07:43:58.115497765 +0000 UTC m=+0.231830803 container init f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 29 02:43:58 np0005539551 podman[227345]: 2025-11-29 07:43:58.121789836 +0000 UTC m=+0.238122854 container start f0551892eff0af3e19e92186d112f76752a63608398825ec30a73317c4f675d3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + sudo -E kolla_set_configs
Nov 29 02:43:58 np0005539551 podman[227345]: nova_compute
Nov 29 02:43:58 np0005539551 systemd[1]: Started nova_compute container.
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Validating config file
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying service configuration files
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Writing out command to execute
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:58 np0005539551 nova_compute[227360]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:58 np0005539551 nova_compute[227360]: ++ cat /run_command
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + CMD=nova-compute
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + ARGS=
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + sudo kolla_copy_cacerts
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + [[ ! -n '' ]]
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + . kolla_extend_start
Nov 29 02:43:58 np0005539551 nova_compute[227360]: Running command: 'nova-compute'
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + umask 0022
Nov 29 02:43:58 np0005539551 nova_compute[227360]: + exec nova-compute
Nov 29 02:43:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:58 np0005539551 python3.9[227523]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:43:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:43:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:59.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:59 np0005539551 systemd[1]: Started libpod-conmon-d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc.scope.
Nov 29 02:43:59 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:43:59 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935b11c6afe12c35cb7520e753733cf631f0933f998fe88f3c51723bd1026a56/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935b11c6afe12c35cb7520e753733cf631f0933f998fe88f3c51723bd1026a56/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935b11c6afe12c35cb7520e753733cf631f0933f998fe88f3c51723bd1026a56/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539551 podman[227549]: 2025-11-29 07:43:59.714517906 +0000 UTC m=+0.607099389 container init d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:43:59 np0005539551 podman[227549]: 2025-11-29 07:43:59.723208772 +0000 UTC m=+0.615790165 container start d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 02:43:59 np0005539551 nova_compute_init[227570]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 02:43:59 np0005539551 systemd[1]: libpod-d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc.scope: Deactivated successfully.
Nov 29 02:43:59 np0005539551 python3.9[227523]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 02:43:59 np0005539551 podman[227571]: 2025-11-29 07:43:59.836783603 +0000 UTC m=+0.033670857 container died d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 29 02:44:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc-userdata-shm.mount: Deactivated successfully.
Nov 29 02:44:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay-935b11c6afe12c35cb7520e753733cf631f0933f998fe88f3c51723bd1026a56-merged.mount: Deactivated successfully.
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.220 227364 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.221 227364 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.221 227364 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.221 227364 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:44:00 np0005539551 podman[227571]: 2025-11-29 07:44:00.238335786 +0000 UTC m=+0.435223030 container cleanup d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Nov 29 02:44:00 np0005539551 systemd[1]: libpod-conmon-d7414d5d6cfe0c87966904e3fd869230db607e5059cea06967c27a45f49aa3dc.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.376 227364 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.400 227364 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.400 227364 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:44:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.862 227364 INFO nova.virt.driver [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:44:00 np0005539551 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539551 systemd[1]: session-49.scope: Consumed 2min 25.910s CPU time.
Nov 29 02:44:00 np0005539551 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Nov 29 02:44:00 np0005539551 systemd-logind[788]: Removed session 49.
Nov 29 02:44:00 np0005539551 nova_compute[227360]: 2025-11-29 07:44:00.964 227364 INFO nova.compute.provider_config [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:44:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:01.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.065 227364 DEBUG oslo_concurrency.lockutils [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.066 227364 DEBUG oslo_concurrency.lockutils [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.066 227364 DEBUG oslo_concurrency.lockutils [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.066 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.066 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.067 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.068 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.069 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.070 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.071 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.072 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.073 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.074 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.075 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.076 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.077 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.078 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.079 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.080 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.081 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.082 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.083 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.084 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.085 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.086 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.087 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.088 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.089 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.090 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.091 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.092 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.093 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.094 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.095 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.096 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.097 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.097 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.097 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.097 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.097 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.098 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.098 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.098 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.098 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.098 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.099 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.100 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.101 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.102 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.103 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.104 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.105 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.106 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.107 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.108 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.109 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.110 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.111 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.112 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.113 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.114 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.115 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.116 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.117 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.118 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.118 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.118 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.118 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.118 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.119 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.120 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.121 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.122 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.123 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.124 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.125 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.126 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.127 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.128 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.129 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.130 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.131 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.132 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.133 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.134 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.135 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.136 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 WARNING oslo_config.cfg [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:44:02 np0005539551 nova_compute[227360]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:44:02 np0005539551 nova_compute[227360]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:44:02 np0005539551 nova_compute[227360]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:44:02 np0005539551 nova_compute[227360]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.137 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.138 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.139 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.140 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.141 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.142 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.143 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.144 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.145 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.146 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.147 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.148 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.149 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.150 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.151 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.152 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.153 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.154 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.155 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.156 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.157 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.158 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.159 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.160 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.161 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.162 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.163 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.164 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.165 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.166 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.166 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.166 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.166 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.166 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.167 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.168 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.169 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.170 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.171 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.172 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.173 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.174 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.175 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.176 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.177 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.178 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.179 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.180 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.181 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.182 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.183 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.184 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.185 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.186 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.187 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.188 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.189 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.190 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.191 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.192 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.193 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.194 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.195 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.196 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.197 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.198 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.199 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.199 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.199 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.199 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.199 227364 DEBUG oslo_service.service [None req-4d803468-1d15-4f99-902b-3a052b503886 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.200 227364 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.319 227364 INFO nova.virt.node [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Determined node identity 67c71d68-0dd7-4589-b775-189b4191a844 from /var/lib/nova/compute_id#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.320 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.322 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.323 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.323 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.347 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1c1265aa90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.352 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1c1265aa90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.354 227364 INFO nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.365 227364 INFO nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <host>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <uuid>2d58586e-4ce1-425d-890c-d5cdff75e822</uuid>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <arch>x86_64</arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model>EPYC-Rome-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <vendor>AMD</vendor>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <microcode version='16777317'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='x2apic'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='tsc-deadline'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='osxsave'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='hypervisor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='tsc_adjust'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='spec-ctrl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='stibp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='arch-capabilities'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='cmp_legacy'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='topoext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='virt-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='lbrv'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='tsc-scale'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='vmcb-clean'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='pause-filter'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='pfthreshold'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='svme-addr-chk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='rdctl-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='mds-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature name='pschange-mc-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <pages unit='KiB' size='4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <pages unit='KiB' size='2048'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <power_management>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <suspend_mem/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </power_management>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <iommu support='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <migration_features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <live/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <uri_transports>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <uri_transport>tcp</uri_transport>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <uri_transport>rdma</uri_transport>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </uri_transports>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </migration_features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <topology>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <cells num='1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <cell id='0'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <memory unit='KiB'>7864316</memory>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <distances>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <sibling id='0' value='10'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          </distances>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          <cpus num='8'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:          </cpus>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        </cell>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </cells>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </topology>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <cache>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </cache>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <secmodel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model>selinux</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <doi>0</doi>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </secmodel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <secmodel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model>dac</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <doi>0</doi>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </secmodel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </host>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <guest>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <os_type>hvm</os_type>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <arch name='i686'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <wordsize>32</wordsize>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <domain type='qemu'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <domain type='kvm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <pae/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <nonpae/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <acpi default='on' toggle='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <apic default='on' toggle='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <cpuselection/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <deviceboot/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <externalSnapshot/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </guest>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <guest>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <os_type>hvm</os_type>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <arch name='x86_64'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <wordsize>64</wordsize>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <domain type='qemu'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <domain type='kvm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <acpi default='on' toggle='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <apic default='on' toggle='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <cpuselection/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <deviceboot/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <externalSnapshot/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </guest>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </capabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.378 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.385 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:44:02 np0005539551 nova_compute[227360]: <domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <domain>kvm</domain>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <arch>i686</arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <vcpu max='4096'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <iothreads supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <os supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='firmware'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <loader supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>rom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pflash</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='readonly'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>yes</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='secure'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </loader>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='maximum' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='maximumMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-model' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <vendor>AMD</vendor>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='stibp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='succor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='custom' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Dhyana-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-128'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-256'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-512'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <memoryBacking supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='sourceType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>anonymous</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>memfd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </memoryBacking>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <disk supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='diskDevice'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>disk</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cdrom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>floppy</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>lun</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>fdc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>sata</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <graphics supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vnc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egl-headless</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <video supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='modelType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vga</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cirrus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>none</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>bochs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ramfb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hostdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='mode'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>subsystem</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='startupPolicy'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>mandatory</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>requisite</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>optional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='subsysType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pci</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='capsType'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='pciBackend'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hostdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <rng supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>random</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <filesystem supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='driverType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>path</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>handle</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtiofs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </filesystem>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <tpm supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-tis</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-crb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emulator</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>external</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendVersion'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>2.0</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </tpm>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <redirdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </redirdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <channel supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </channel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <crypto supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </crypto>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <interface supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>passt</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <panic supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>isa</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>hyperv</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </panic>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <console supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>null</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dev</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pipe</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stdio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>udp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tcp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu-vdagent</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </console>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <gic supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <genid supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backingStoreInput supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backup supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <async-teardown supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <ps2 supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sev supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sgx supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hyperv supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='features'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>relaxed</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vapic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>spinlocks</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vpindex</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>runtime</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>synic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stimer</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reset</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vendor_id</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>frequencies</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reenlightenment</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tlbflush</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ipi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>avic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emsr_bitmap</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>xmm_input</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <spinlocks>4095</spinlocks>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hyperv>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <launchSecurity supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='sectype'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tdx</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </launchSecurity>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.396 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:44:02 np0005539551 nova_compute[227360]: <domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <domain>kvm</domain>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <arch>i686</arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <vcpu max='240'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <iothreads supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <os supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='firmware'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <loader supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>rom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pflash</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='readonly'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>yes</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='secure'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </loader>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='maximum' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='maximumMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-model' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <vendor>AMD</vendor>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='stibp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='succor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='custom' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Dhyana-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-128'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-256'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-512'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <memoryBacking supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='sourceType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>anonymous</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>memfd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </memoryBacking>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <disk supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='diskDevice'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>disk</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cdrom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>floppy</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>lun</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ide</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>fdc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>sata</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <graphics supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vnc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egl-headless</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <video supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='modelType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vga</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cirrus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>none</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>bochs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ramfb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hostdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='mode'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>subsystem</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='startupPolicy'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>mandatory</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>requisite</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>optional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='subsysType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pci</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='capsType'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='pciBackend'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hostdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <rng supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>random</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <filesystem supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='driverType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>path</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>handle</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtiofs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </filesystem>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <tpm supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-tis</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-crb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emulator</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>external</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendVersion'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>2.0</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </tpm>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <redirdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </redirdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <channel supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </channel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <crypto supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </crypto>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <interface supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>passt</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <panic supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>isa</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>hyperv</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </panic>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <console supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>null</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dev</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pipe</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stdio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>udp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tcp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu-vdagent</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </console>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <gic supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <genid supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backingStoreInput supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backup supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <async-teardown supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <ps2 supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sev supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sgx supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hyperv supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='features'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>relaxed</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vapic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>spinlocks</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vpindex</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>runtime</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>synic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stimer</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reset</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vendor_id</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>frequencies</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reenlightenment</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tlbflush</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ipi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>avic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emsr_bitmap</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>xmm_input</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <spinlocks>4095</spinlocks>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hyperv>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <launchSecurity supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='sectype'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tdx</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </launchSecurity>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.456 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.460 227364 DEBUG nova.virt.libvirt.volume.mount [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.465 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:44:02 np0005539551 nova_compute[227360]: <domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <domain>kvm</domain>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <arch>x86_64</arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <vcpu max='4096'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <iothreads supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <os supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='firmware'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>efi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <loader supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>rom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pflash</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='readonly'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>yes</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='secure'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>yes</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </loader>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='maximum' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='maximumMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-model' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <vendor>AMD</vendor>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='stibp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='succor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='custom' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Dhyana-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-128'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-256'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-512'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <memoryBacking supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='sourceType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>anonymous</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>memfd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </memoryBacking>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <disk supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='diskDevice'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>disk</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cdrom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>floppy</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>lun</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>fdc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>sata</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <graphics supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vnc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egl-headless</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <video supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='modelType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vga</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cirrus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>none</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>bochs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ramfb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hostdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='mode'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>subsystem</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='startupPolicy'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>mandatory</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>requisite</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>optional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='subsysType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pci</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='capsType'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='pciBackend'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hostdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <rng supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>random</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <filesystem supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='driverType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>path</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>handle</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtiofs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </filesystem>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <tpm supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-tis</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-crb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emulator</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>external</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendVersion'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>2.0</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </tpm>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <redirdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </redirdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <channel supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </channel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <crypto supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </crypto>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <interface supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>passt</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <panic supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>isa</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>hyperv</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </panic>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <console supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>null</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dev</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pipe</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stdio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>udp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tcp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu-vdagent</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </console>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <gic supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <genid supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backingStoreInput supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backup supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <async-teardown supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <ps2 supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sev supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sgx supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hyperv supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='features'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>relaxed</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vapic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>spinlocks</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vpindex</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>runtime</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>synic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stimer</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reset</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vendor_id</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>frequencies</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reenlightenment</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tlbflush</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ipi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>avic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emsr_bitmap</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>xmm_input</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <spinlocks>4095</spinlocks>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hyperv>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <launchSecurity supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='sectype'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tdx</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </launchSecurity>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.527 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:44:02 np0005539551 nova_compute[227360]: <domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <domain>kvm</domain>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <arch>x86_64</arch>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <vcpu max='240'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <iothreads supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <os supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='firmware'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <loader supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>rom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pflash</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='readonly'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>yes</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='secure'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>no</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </loader>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='maximum' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='maximumMigratable'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>on</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>off</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='host-model' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <vendor>AMD</vendor>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='x2apic'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='stibp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='succor'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lbrv'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <mode name='custom' supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Broadwell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Cooperlake-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Denverton-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Dhyana-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='auto-ibrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amd-psfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='no-nested-data-bp'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='null-sel-clr-base'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='stibp-always-on'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='EPYC-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-128'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-256'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx10-512'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='prefetchiti'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Haswell-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='IvyBridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='KnightsMill-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4fmaps'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-4vnniw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512er'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512pf'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fma4'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tbm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xop'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='amx-tile'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-bf16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-fp16'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bitalg'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vbmi2'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrc'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fzrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='la57'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='taa-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='tsx-ldtrk'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xfd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='SierraForest-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ifma'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-ne-convert'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx-vnni-int8'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='bus-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cmpccxadd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fbsdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='fsrs'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ibrs-all'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mcdt-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pbrsb-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='psdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='serialize'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vaes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='vpclmulqdq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='hle'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='rtm'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512bw'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512cd'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512dq'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512f'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='avx512vl'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='invpcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pcid'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='pku'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='mpx'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v2'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v3'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='core-capability'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='split-lock-detect'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='Snowridge-v4'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='cldemote'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='erms'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='gfni'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdir64b'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='movdiri'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='xsaves'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='athlon-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='core2duo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='coreduo-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='n270-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='ss'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <blockers model='phenom-v1'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnow'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <feature name='3dnowext'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </blockers>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </mode>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <memoryBacking supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <enum name='sourceType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>anonymous</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <value>memfd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </memoryBacking>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <disk supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='diskDevice'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>disk</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cdrom</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>floppy</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>lun</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ide</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>fdc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>sata</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <graphics supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vnc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egl-headless</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <video supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='modelType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vga</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>cirrus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>none</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>bochs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ramfb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hostdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='mode'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>subsystem</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='startupPolicy'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>mandatory</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>requisite</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>optional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='subsysType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pci</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>scsi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='capsType'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='pciBackend'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hostdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <rng supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtio-non-transitional</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>random</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>egd</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <filesystem supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='driverType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>path</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>handle</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>virtiofs</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </filesystem>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <tpm supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-tis</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tpm-crb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emulator</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>external</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendVersion'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>2.0</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </tpm>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <redirdev supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='bus'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>usb</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </redirdev>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <channel supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </channel>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <crypto supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendModel'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>builtin</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </crypto>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <interface supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='backendType'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>default</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>passt</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <panic supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='model'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>isa</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>hyperv</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </panic>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <console supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='type'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>null</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vc</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pty</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dev</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>file</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>pipe</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stdio</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>udp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tcp</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>unix</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>qemu-vdagent</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>dbus</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </console>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <gic supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <vmcoreinfo supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <genid supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backingStoreInput supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <backup supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <async-teardown supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <ps2 supported='yes'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sev supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <sgx supported='no'/>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <hyperv supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='features'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>relaxed</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vapic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>spinlocks</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vpindex</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>runtime</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>synic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>stimer</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reset</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>vendor_id</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>frequencies</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>reenlightenment</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tlbflush</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>ipi</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>avic</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>emsr_bitmap</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>xmm_input</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <spinlocks>4095</spinlocks>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <stimer_direct>on</stimer_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </defaults>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </hyperv>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    <launchSecurity supported='yes'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      <enum name='sectype'>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:        <value>tdx</value>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:      </enum>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:    </launchSecurity>
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </domainCapabilities>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.587 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.588 227364 INFO nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.591 227364 INFO nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.601 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:44:02 np0005539551 nova_compute[227360]:  <model>Nehalem</model>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: </cpu>
Nov 29 02:44:02 np0005539551 nova_compute[227360]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.605 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.657 227364 INFO nova.virt.node [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Determined node identity 67c71d68-0dd7-4589-b775-189b4191a844 from /var/lib/nova/compute_id#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.676 227364 WARNING nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Compute nodes ['67c71d68-0dd7-4589-b775-189b4191a844'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.726 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.759 227364 WARNING nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.760 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.761 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.761 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.761 227364 DEBUG nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:02 np0005539551 nova_compute[227360]: 2025-11-29 07:44:02.762 227364 DEBUG oslo_concurrency.processutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3656917781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:03 np0005539551 nova_compute[227360]: 2025-11-29 07:44:03.209 227364 DEBUG oslo_concurrency.processutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:03 np0005539551 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:44:03 np0005539551 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:44:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:44:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:44:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.125 227364 WARNING nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.128 227364 DEBUG nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5327MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.128 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.129 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.176 227364 WARNING nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] No compute node record for compute-1.ctlplane.example.com:67c71d68-0dd7-4589-b775-189b4191a844: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 67c71d68-0dd7-4589-b775-189b4191a844 could not be found.#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.213 227364 INFO nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 67c71d68-0dd7-4589-b775-189b4191a844#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.529 227364 DEBUG nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.529 227364 DEBUG nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.627 227364 INFO nova.scheduler.client.report [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [req-26b38923-1246-41a9-8eed-71c8e3030c95] Created resource provider record via placement API for resource provider with UUID 67c71d68-0dd7-4589-b775-189b4191a844 and name compute-1.ctlplane.example.com.#033[00m
Nov 29 02:44:04 np0005539551 nova_compute[227360]: 2025-11-29 07:44:04.666 227364 DEBUG oslo_concurrency.processutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1221066017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.195 227364 DEBUG oslo_concurrency.processutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.202 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 02:44:05 np0005539551 nova_compute[227360]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.203 227364 INFO nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.204 227364 DEBUG nova.compute.provider_tree [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.205 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.208 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 02:44:05 np0005539551 nova_compute[227360]:  <arch>x86_64</arch>
Nov 29 02:44:05 np0005539551 nova_compute[227360]:  <model>Nehalem</model>
Nov 29 02:44:05 np0005539551 nova_compute[227360]:  <vendor>AMD</vendor>
Nov 29 02:44:05 np0005539551 nova_compute[227360]:  <topology sockets="8" cores="1" threads="1"/>
Nov 29 02:44:05 np0005539551 nova_compute[227360]: </cpu>
Nov 29 02:44:05 np0005539551 nova_compute[227360]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Nov 29 02:44:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.314 227364 DEBUG nova.scheduler.client.report [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Updated inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.314 227364 DEBUG nova.compute.provider_tree [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Updating resource provider 67c71d68-0dd7-4589-b775-189b4191a844 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.315 227364 DEBUG nova.compute.provider_tree [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:44:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:05.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.447 227364 DEBUG nova.compute.provider_tree [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Updating resource provider 67c71d68-0dd7-4589-b775-189b4191a844 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.532 227364 DEBUG nova.compute.resource_tracker [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.533 227364 DEBUG oslo_concurrency.lockutils [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.533 227364 DEBUG nova.service [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.728 227364 DEBUG nova.service [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 02:44:05 np0005539551 nova_compute[227360]: 2025-11-29 07:44:05.729 227364 DEBUG nova.servicegroup.drivers.db [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 02:44:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:15.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:17.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:18 np0005539551 podman[227779]: 2025-11-29 07:44:18.628347464 +0000 UTC m=+0.062725968 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 02:44:18 np0005539551 podman[227777]: 2025-11-29 07:44:18.65535684 +0000 UTC m=+0.098091922 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 02:44:18 np0005539551 podman[227778]: 2025-11-29 07:44:18.667275414 +0000 UTC m=+0.109838171 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:44:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:19.356364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259356434, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 3264, "num_deletes": 502, "total_data_size": 7825815, "memory_usage": 7949824, "flush_reason": "Manual Compaction"}
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 29 02:44:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:44:19.835 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:44:19.836 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:44:19.836 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259965694, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 5160983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15327, "largest_seqno": 18586, "table_properties": {"data_size": 5148064, "index_size": 8133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 29764, "raw_average_key_size": 20, "raw_value_size": 5119991, "raw_average_value_size": 3466, "num_data_blocks": 361, "num_entries": 1477, "num_filter_entries": 1477, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401836, "oldest_key_time": 1764401836, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 609383 microseconds, and 11114 cpu microseconds.
Nov 29 02:44:19 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:19.965750) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 5160983 bytes OK
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:19.965774) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.061898) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.061945) EVENT_LOG_v1 {"time_micros": 1764402260061935, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.061971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 7810685, prev total WAL file size 7877619, number of live WAL files 2.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.063716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(5040KB)], [33(9675KB)]
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260063839, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 15068204, "oldest_snapshot_seqno": -1}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5093 keys, 10632798 bytes, temperature: kUnknown
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260534814, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 10632798, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596124, "index_size": 22868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127543, "raw_average_key_size": 25, "raw_value_size": 10501404, "raw_average_value_size": 2061, "num_data_blocks": 956, "num_entries": 5093, "num_filter_entries": 5093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764402260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.535125) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 10632798 bytes
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.642532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.0 rd, 22.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(5.0) write-amplify(2.1) OK, records in: 6122, records dropped: 1029 output_compression: NoCompression
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.642576) EVENT_LOG_v1 {"time_micros": 1764402260642559, "job": 18, "event": "compaction_finished", "compaction_time_micros": 471109, "compaction_time_cpu_micros": 41916, "output_level": 6, "num_output_files": 1, "total_output_size": 10632798, "num_input_records": 6122, "num_output_records": 5093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260643769, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260646501, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.063579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.646614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.646623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.646627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.646632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.646636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.647006) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260647032, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 256, "total_data_size": 69049, "memory_usage": 76120, "flush_reason": "Manual Compaction"}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260791028, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 45593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18588, "largest_seqno": 18855, "table_properties": {"data_size": 43720, "index_size": 102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4300, "raw_average_key_size": 16, "raw_value_size": 40131, "raw_average_value_size": 149, "num_data_blocks": 4, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402259, "oldest_key_time": 1764402259, "file_creation_time": 1764402260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 144102 microseconds, and 940 cpu microseconds.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.791102) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 45593 bytes OK
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.791131) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.889833) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.889885) EVENT_LOG_v1 {"time_micros": 1764402260889873, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.889914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 66934, prev total WAL file size 67572, number of live WAL files 2.
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.981966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(44KB)], [36(10MB)]
Nov 29 02:44:20 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260982025, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 10678391, "oldest_snapshot_seqno": -1}
Nov 29 02:44:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:21.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:21.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:21 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4842 keys, 10253584 bytes, temperature: kUnknown
Nov 29 02:44:21 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402261586455, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10253584, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10218928, "index_size": 21426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123647, "raw_average_key_size": 25, "raw_value_size": 10128881, "raw_average_value_size": 2091, "num_data_blocks": 878, "num_entries": 4842, "num_filter_entries": 4842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764402260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:21 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:21.586774) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10253584 bytes
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.013754) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 17.7 rd, 17.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(459.1) write-amplify(224.9) OK, records in: 5361, records dropped: 519 output_compression: NoCompression
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.013797) EVENT_LOG_v1 {"time_micros": 1764402262013781, "job": 20, "event": "compaction_finished", "compaction_time_micros": 604516, "compaction_time_cpu_micros": 42706, "output_level": 6, "num_output_files": 1, "total_output_size": 10253584, "num_input_records": 5361, "num_output_records": 4842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402262014021, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402262016202, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:20.981824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.016276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.016282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.016285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.016287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:44:22.016318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:44:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:44:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:27.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:28 np0005539551 nova_compute[227360]: 2025-11-29 07:44:28.731 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:29 np0005539551 nova_compute[227360]: 2025-11-29 07:44:29.112 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:29.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:33.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:35.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:44:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.9 total, 600.0 interval
Cumulative writes: 6716 writes, 26K keys, 6716 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 6716 writes, 1242 syncs, 5.41 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 327 writes, 514 keys, 327 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
Interval WAL: 327 writes, 134 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:44:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:41.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:43.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:45.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:49.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:49 np0005539551 podman[227837]: 2025-11-29 07:44:49.611426397 +0000 UTC m=+0.055954544 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:44:49 np0005539551 podman[227836]: 2025-11-29 07:44:49.636197982 +0000 UTC m=+0.084381858 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 29 02:44:49 np0005539551 podman[227835]: 2025-11-29 07:44:49.68607378 +0000 UTC m=+0.136925419 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:44:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:51.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:53.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:55.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:57.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:59.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:44:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:00 np0005539551 nova_compute[227360]: 2025-11-29 07:45:00.413 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:00 np0005539551 nova_compute[227360]: 2025-11-29 07:45:00.413 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:00 np0005539551 nova_compute[227360]: 2025-11-29 07:45:00.414 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:45:00 np0005539551 nova_compute[227360]: 2025-11-29 07:45:00.414 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:45:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:01.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:03.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.938 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.940 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.940 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.940 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.941 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.941 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.942 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.943 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:45:04 np0005539551 nova_compute[227360]: 2025-11-29 07:45:04.943 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:04 np0005539551 podman[228290]: 2025-11-29 07:45:04.894814237 +0000 UTC m=+0.027494100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:05.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:05 np0005539551 podman[228290]: 2025-11-29 07:45:05.920768807 +0000 UTC m=+1.053448650 container create 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:45:06 np0005539551 systemd[1]: Started libpod-conmon-695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac.scope.
Nov 29 02:45:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:45:06 np0005539551 podman[228290]: 2025-11-29 07:45:06.952578855 +0000 UTC m=+2.085258738 container init 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:45:06 np0005539551 podman[228290]: 2025-11-29 07:45:06.960754168 +0000 UTC m=+2.093434011 container start 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:45:06 np0005539551 funny_raman[228306]: 167 167
Nov 29 02:45:06 np0005539551 systemd[1]: libpod-695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac.scope: Deactivated successfully.
Nov 29 02:45:07 np0005539551 nova_compute[227360]: 2025-11-29 07:45:07.310 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:07 np0005539551 nova_compute[227360]: 2025-11-29 07:45:07.311 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:07 np0005539551 nova_compute[227360]: 2025-11-29 07:45:07.312 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:07 np0005539551 nova_compute[227360]: 2025-11-29 07:45:07.312 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:45:07 np0005539551 nova_compute[227360]: 2025-11-29 07:45:07.313 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:07.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539551 podman[228290]: 2025-11-29 07:45:07.66391052 +0000 UTC m=+2.796590473 container attach 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:45:07 np0005539551 podman[228290]: 2025-11-29 07:45:07.664557137 +0000 UTC m=+2.797237050 container died 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:45:09 np0005539551 systemd[1]: var-lib-containers-storage-overlay-79155415ca5507d0f2ae18527f2cd82bfc62894c9750210288d5cf135172939f-merged.mount: Deactivated successfully.
Nov 29 02:45:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:09.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:45:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 3221 writes, 19K keys, 3221 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 3221 writes, 3221 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1030 writes, 5423 keys, 1030 commit groups, 1.0 writes per commit group, ingest: 12.82 MB, 0.02 MB/s#012Interval WAL: 1031 writes, 1031 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.2      3.94              0.09        10    0.394       0      0       0.0       0.0#012  L6      1/0    9.78 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8     15.0     13.0      7.15              0.34         9    0.794     44K   4555       0.0       0.0#012 Sum      1/0    9.78 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.8      9.7     10.6     11.08              0.42        19    0.583     44K   4555       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     14.4     14.3      2.55              0.15         6    0.424     16K   2010       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     15.0     13.0      7.15              0.34         9    0.794     44K   4555       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.2      3.93              0.09         9    0.437       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.11 GB write, 0.06 MB/s write, 0.10 GB read, 0.06 MB/s read, 11.1 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 2.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 5.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000117 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(294,5.20 MB,1.70978%) FilterBlock(19,128.30 KB,0.0412138%) IndexBlock(19,265.59 KB,0.0853187%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:45:10 np0005539551 podman[228290]: 2025-11-29 07:45:10.885958515 +0000 UTC m=+6.018638358 container remove 695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:45:10 np0005539551 systemd[1]: libpod-conmon-695fded741994675198a20cd8ab4db44254866c764c9486e703a82de847c21ac.scope: Deactivated successfully.
Nov 29 02:45:11 np0005539551 podman[228351]: 2025-11-29 07:45:11.034112588 +0000 UTC m=+0.030137332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2422427854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:11 np0005539551 nova_compute[227360]: 2025-11-29 07:45:11.228 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.915s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:11.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539551 nova_compute[227360]: 2025-11-29 07:45:11.474 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:11 np0005539551 nova_compute[227360]: 2025-11-29 07:45:11.476 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5315MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:45:11 np0005539551 nova_compute[227360]: 2025-11-29 07:45:11.476 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:11 np0005539551 nova_compute[227360]: 2025-11-29 07:45:11.476 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:12 np0005539551 podman[228351]: 2025-11-29 07:45:12.299544078 +0000 UTC m=+1.295568752 container create 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:45:12 np0005539551 systemd[1]: Started libpod-conmon-741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1.scope.
Nov 29 02:45:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:45:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76cee3fbb0218604bdfc65799fb219698690ae32d1c63c6831da34cc70570d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76cee3fbb0218604bdfc65799fb219698690ae32d1c63c6831da34cc70570d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76cee3fbb0218604bdfc65799fb219698690ae32d1c63c6831da34cc70570d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76cee3fbb0218604bdfc65799fb219698690ae32d1c63c6831da34cc70570d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:13 np0005539551 podman[228351]: 2025-11-29 07:45:13.495214327 +0000 UTC m=+2.491239001 container init 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:45:13 np0005539551 podman[228351]: 2025-11-29 07:45:13.506111913 +0000 UTC m=+2.502136567 container start 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:14 np0005539551 podman[228351]: 2025-11-29 07:45:14.005114088 +0000 UTC m=+3.001138772 container attach 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]: [
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:    {
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "available": false,
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "ceph_device": false,
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "lsm_data": {},
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "lvs": [],
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "path": "/dev/sr0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "rejected_reasons": [
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "Has a FileSystem",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "Insufficient space (<5GB)"
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        ],
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        "sys_api": {
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "actuators": null,
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "device_nodes": "sr0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "devname": "sr0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "human_readable_size": "482.00 KB",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "id_bus": "ata",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "model": "QEMU DVD-ROM",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "nr_requests": "2",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "parent": "/dev/sr0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "partitions": {},
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "path": "/dev/sr0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "removable": "1",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "rev": "2.5+",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "ro": "0",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "rotational": "1",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "sas_address": "",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "sas_device_handle": "",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "scheduler_mode": "mq-deadline",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "sectors": 0,
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "sectorsize": "2048",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "size": 493568.0,
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "support_discard": "2048",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "type": "disk",
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:            "vendor": "QEMU"
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:        }
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]:    }
Nov 29 02:45:14 np0005539551 cranky_murdock[228368]: ]
Nov 29 02:45:14 np0005539551 systemd[1]: libpod-741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1.scope: Deactivated successfully.
Nov 29 02:45:14 np0005539551 systemd[1]: libpod-741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1.scope: Consumed 1.173s CPU time.
Nov 29 02:45:14 np0005539551 podman[228351]: 2025-11-29 07:45:14.666559114 +0000 UTC m=+3.662583768 container died 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:45:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:15.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:17.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539551 nova_compute[227360]: 2025-11-29 07:45:17.761 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:45:17 np0005539551 nova_compute[227360]: 2025-11-29 07:45:17.763 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:45:17 np0005539551 nova_compute[227360]: 2025-11-29 07:45:17.788 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:17 np0005539551 systemd[1]: var-lib-containers-storage-overlay-b76cee3fbb0218604bdfc65799fb219698690ae32d1c63c6831da34cc70570d9-merged.mount: Deactivated successfully.
Nov 29 02:45:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:45:19.836 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:45:19.837 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:45:19.837 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:21.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1491620520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:21 np0005539551 nova_compute[227360]: 2025-11-29 07:45:21.654 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.866s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:21 np0005539551 nova_compute[227360]: 2025-11-29 07:45:21.664 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:22 np0005539551 podman[228351]: 2025-11-29 07:45:22.525431537 +0000 UTC m=+11.521456221 container remove 741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:22 np0005539551 systemd[1]: libpod-conmon-741568ee84fc1ffe0dcaad6ab193405737dfab91bc74b8f956ac6cc8303f70f1.scope: Deactivated successfully.
Nov 29 02:45:22 np0005539551 podman[229482]: 2025-11-29 07:45:22.639087552 +0000 UTC m=+2.093634458 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:45:22 np0005539551 podman[229481]: 2025-11-29 07:45:22.663660221 +0000 UTC m=+2.111472123 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 02:45:22 np0005539551 podman[229480]: 2025-11-29 07:45:22.675842012 +0000 UTC m=+2.133466051 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:45:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:23.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:25.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:27.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:27.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:28 np0005539551 nova_compute[227360]: 2025-11-29 07:45:28.669 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:28 np0005539551 nova_compute[227360]: 2025-11-29 07:45:28.670 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:45:28 np0005539551 nova_compute[227360]: 2025-11-29 07:45:28.670 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 17.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:45:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:29.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:45:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:31.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:33.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:35.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:36 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:36 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:41.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:43.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:45.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:51.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:53.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:53.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:53 np0005539551 podman[229597]: 2025-11-29 07:45:53.622167557 +0000 UTC m=+0.064879167 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 02:45:53 np0005539551 podman[229598]: 2025-11-29 07:45:53.639373435 +0000 UTC m=+0.083763271 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 02:45:53 np0005539551 podman[229596]: 2025-11-29 07:45:53.658919767 +0000 UTC m=+0.110348406 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 02:45:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:55.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:55.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:57.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:59.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:45:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:59.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:03.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:05.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:07.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:09.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:11.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:11.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:13.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:19.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:19.838 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:19.838 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:19.839 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:23.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:24 np0005539551 podman[229660]: 2025-11-29 07:46:24.637656501 +0000 UTC m=+0.085303423 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:46:24 np0005539551 podman[229659]: 2025-11-29 07:46:24.637877757 +0000 UTC m=+0.078484848 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd)
Nov 29 02:46:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:24 np0005539551 podman[229658]: 2025-11-29 07:46:24.672229023 +0000 UTC m=+0.120558504 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:46:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.662 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.662 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.707 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.707 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.707 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.881 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.881 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.881 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.881 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.881 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.882 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.882 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.882 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.882 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.915 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.916 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.916 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.916 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:46:28 np0005539551 nova_compute[227360]: 2025-11-29 07:46:28.917 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4053488785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:29 np0005539551 nova_compute[227360]: 2025-11-29 07:46:29.559 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:29 np0005539551 nova_compute[227360]: 2025-11-29 07:46:29.753 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:46:29 np0005539551 nova_compute[227360]: 2025-11-29 07:46:29.755 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:46:29 np0005539551 nova_compute[227360]: 2025-11-29 07:46:29.755 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:29 np0005539551 nova_compute[227360]: 2025-11-29 07:46:29.755 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:30 np0005539551 nova_compute[227360]: 2025-11-29 07:46:30.034 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:46:30 np0005539551 nova_compute[227360]: 2025-11-29 07:46:30.034 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:46:30 np0005539551 nova_compute[227360]: 2025-11-29 07:46:30.063 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/735255498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:30 np0005539551 nova_compute[227360]: 2025-11-29 07:46:30.749 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:30 np0005539551 nova_compute[227360]: 2025-11-29 07:46:30.754 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:31 np0005539551 nova_compute[227360]: 2025-11-29 07:46:31.049 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:31 np0005539551 nova_compute[227360]: 2025-11-29 07:46:31.051 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:46:31 np0005539551 nova_compute[227360]: 2025-11-29 07:46:31.052 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:46:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:37.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:38 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:46:38 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:46:38 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:46:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:39.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:39 np0005539551 radosgw[83679]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:46:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:41.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:43.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:45.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:47.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:49.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:51.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:51.976 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:51.976 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:46:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:46:51.977 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:53.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:55 np0005539551 podman[229947]: 2025-11-29 07:46:55.619126808 +0000 UTC m=+0.057116687 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 02:46:55 np0005539551 podman[229946]: 2025-11-29 07:46:55.619277002 +0000 UTC m=+0.064243970 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 02:46:55 np0005539551 podman[229945]: 2025-11-29 07:46:55.643150101 +0000 UTC m=+0.090943456 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:46:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:57.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:57.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:59.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:46:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:01.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:01.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:03.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:03.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:47:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:05.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:47:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:05.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:07.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:11.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:13.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:15.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:15.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:17.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:19.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:47:19.839 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:47:19.839 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:47:19.839 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:21.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:23.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:25.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:25.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:26 np0005539551 podman[230008]: 2025-11-29 07:47:26.600239523 +0000 UTC m=+0.054566788 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:47:26 np0005539551 podman[230009]: 2025-11-29 07:47:26.631202085 +0000 UTC m=+0.083265018 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:47:26 np0005539551 podman[230007]: 2025-11-29 07:47:26.652196296 +0000 UTC m=+0.100401264 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 02:47:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:27.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:27.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:29.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:29.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.053 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.054 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.055 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.055 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.070 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.070 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.070 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.070 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.071 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.071 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.071 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.071 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.071 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.095 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.096 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.096 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.096 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.097 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:31.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4072286610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.707 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:31.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.876 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.877 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5336MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.877 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.878 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.947 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.947 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:31 np0005539551 nova_compute[227360]: 2025-11-29 07:47:31.959 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/212153851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:32 np0005539551 nova_compute[227360]: 2025-11-29 07:47:32.413 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:32 np0005539551 nova_compute[227360]: 2025-11-29 07:47:32.419 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:32 np0005539551 nova_compute[227360]: 2025-11-29 07:47:32.437 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:32 np0005539551 nova_compute[227360]: 2025-11-29 07:47:32.441 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:47:32 np0005539551 nova_compute[227360]: 2025-11-29 07:47:32.441 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:33.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:33.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:35.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:35.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:37.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:41.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:51.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.040585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474040666, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2158, "num_deletes": 250, "total_data_size": 5773684, "memory_usage": 5848520, "flush_reason": "Manual Compaction"}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474074001, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2234693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18860, "largest_seqno": 21013, "table_properties": {"data_size": 2228089, "index_size": 3483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16324, "raw_average_key_size": 20, "raw_value_size": 2213785, "raw_average_value_size": 2788, "num_data_blocks": 157, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402260, "oldest_key_time": 1764402260, "file_creation_time": 1764402474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 33494 microseconds, and 10691 cpu microseconds.
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.074077) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2234693 bytes OK
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.074102) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.077449) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.077530) EVENT_LOG_v1 {"time_micros": 1764402474077513, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.077571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 5764064, prev total WAL file size 5764064, number of live WAL files 2.
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.080171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2182KB)], [39(10013KB)]
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474080271, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 12488277, "oldest_snapshot_seqno": -1}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5217 keys, 10129680 bytes, temperature: kUnknown
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474176748, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 10129680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10094212, "index_size": 21342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131689, "raw_average_key_size": 25, "raw_value_size": 9999246, "raw_average_value_size": 1916, "num_data_blocks": 878, "num_entries": 5217, "num_filter_entries": 5217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764402474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.177078) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 10129680 bytes
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.179068) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.4 rd, 104.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 5636, records dropped: 419 output_compression: NoCompression
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.179096) EVENT_LOG_v1 {"time_micros": 1764402474179084, "job": 22, "event": "compaction_finished", "compaction_time_micros": 96542, "compaction_time_cpu_micros": 43678, "output_level": 6, "num_output_files": 1, "total_output_size": 10129680, "num_input_records": 5636, "num_output_records": 5217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474179815, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474181786, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.080026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.182018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.182031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.182035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.182039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:47:54.182044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:55.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:47:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:47:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:47:57 np0005539551 podman[230251]: 2025-11-29 07:47:57.616593556 +0000 UTC m=+0.065172272 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:47:57 np0005539551 podman[230250]: 2025-11-29 07:47:57.616717329 +0000 UTC m=+0.067732562 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 02:47:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:57 np0005539551 podman[230249]: 2025-11-29 07:47:57.639320224 +0000 UTC m=+0.095231472 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 29 02:47:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:47:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:59.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:01.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:03 np0005539551 nova_compute[227360]: 2025-11-29 07:48:03.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:03 np0005539551 nova_compute[227360]: 2025-11-29 07:48:03.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:05.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:48:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.369 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.369 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.370 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.792 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.792 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.792 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.794 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:48:06 np0005539551 nova_compute[227360]: 2025-11-29 07:48:06.794 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:07.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:07.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.405 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.405 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.406 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.406 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.406 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1644816519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:09 np0005539551 nova_compute[227360]: 2025-11-29 07:48:09.854 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.017 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.018 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5317MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.018 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.018 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.132 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.132 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.151 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2732364866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.589 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.594 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.620 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.621 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:48:10 np0005539551 nova_compute[227360]: 2025-11-29 07:48:10.621 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.325269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494325369, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 487, "num_deletes": 251, "total_data_size": 669382, "memory_usage": 679496, "flush_reason": "Manual Compaction"}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494331849, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 442019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21018, "largest_seqno": 21500, "table_properties": {"data_size": 439231, "index_size": 824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6731, "raw_average_key_size": 19, "raw_value_size": 433656, "raw_average_value_size": 1249, "num_data_blocks": 35, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402474, "oldest_key_time": 1764402474, "file_creation_time": 1764402494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 6633 microseconds, and 2912 cpu microseconds.
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.331902) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 442019 bytes OK
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.331926) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.365188) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.365230) EVENT_LOG_v1 {"time_micros": 1764402494365221, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.365252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 666406, prev total WAL file size 666406, number of live WAL files 2.
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.365789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(431KB)], [42(9892KB)]
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494365922, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 10571699, "oldest_snapshot_seqno": -1}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5047 keys, 8208392 bytes, temperature: kUnknown
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494752768, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 8208392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8175553, "index_size": 19097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128850, "raw_average_key_size": 25, "raw_value_size": 8084924, "raw_average_value_size": 1601, "num_data_blocks": 777, "num_entries": 5047, "num_filter_entries": 5047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764402494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.752982) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8208392 bytes
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.754765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.3 rd, 21.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(42.5) write-amplify(18.6) OK, records in: 5564, records dropped: 517 output_compression: NoCompression
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.754780) EVENT_LOG_v1 {"time_micros": 1764402494754773, "job": 24, "event": "compaction_finished", "compaction_time_micros": 386886, "compaction_time_cpu_micros": 17460, "output_level": 6, "num_output_files": 1, "total_output_size": 8208392, "num_input_records": 5564, "num_output_records": 5047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494754944, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494756578, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.365706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.756716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.756722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.756725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.756727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:14 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:48:14.756730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:15.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - - [29/Nov/2025:07:48:18.749 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 29 02:48:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:48:19.840 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:48:19.841 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:48:19.841 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:21.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:23.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:28 np0005539551 podman[230409]: 2025-11-29 07:48:28.617504758 +0000 UTC m=+0.062070266 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:48:28 np0005539551 podman[230408]: 2025-11-29 07:48:28.635520275 +0000 UTC m=+0.087387865 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 02:48:28 np0005539551 podman[230407]: 2025-11-29 07:48:28.684161309 +0000 UTC m=+0.130489296 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:48:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 02:48:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:29.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 02:48:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:31.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:31.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 02:48:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:33.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:37.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:37.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 02:48:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:39.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:41.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:43.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 02:48:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:45.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:47.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:47.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:49.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:49.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:51.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:53.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:55.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:55.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:57.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:57.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539551 podman[230474]: 2025-11-29 07:48:59.595671061 +0000 UTC m=+0.052580024 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:48:59 np0005539551 podman[230475]: 2025-11-29 07:48:59.617019561 +0000 UTC m=+0.062865378 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:48:59 np0005539551 podman[230473]: 2025-11-29 07:48:59.637188618 +0000 UTC m=+0.092636430 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:48:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:59.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:48:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:59.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.425 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.426 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:49:00 np0005539551 nova_compute[227360]: 2025-11-29 07:49:00.438 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:01.245 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:01.247 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:49:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:02 np0005539551 nova_compute[227360]: 2025-11-29 07:49:02.448 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.448 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.448 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/976117785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:03 np0005539551 nova_compute[227360]: 2025-11-29 07:49:03.971 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.134 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.136 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5322MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.136 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.136 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:04.250 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.550 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.550 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:49:04 np0005539551 nova_compute[227360]: 2025-11-29 07:49:04.572 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:05.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4191482599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:07 np0005539551 nova_compute[227360]: 2025-11-29 07:49:07.743 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:07 np0005539551 nova_compute[227360]: 2025-11-29 07:49:07.754 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:49:07 np0005539551 nova_compute[227360]: 2025-11-29 07:49:07.773 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:49:07 np0005539551 nova_compute[227360]: 2025-11-29 07:49:07.776 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:49:07 np0005539551 nova_compute[227360]: 2025-11-29 07:49:07.777 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:07.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.777 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.778 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.778 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.793 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.794 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.794 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:49:08 np0005539551 nova_compute[227360]: 2025-11-29 07:49:08.794 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:49:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:09.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:09.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:11.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:49:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 02:49:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:13.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 02:49:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:15.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 02:49:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:17.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:19.841 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:19.842 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:19.842 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:49:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:21.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:49:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:23.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 02:49:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:25.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:25.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:27.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:29.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:30 np0005539551 podman[230763]: 2025-11-29 07:49:30.661455256 +0000 UTC m=+0.097459173 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:49:30 np0005539551 podman[230764]: 2025-11-29 07:49:30.686086896 +0000 UTC m=+0.116619372 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:49:30 np0005539551 podman[230762]: 2025-11-29 07:49:30.705242105 +0000 UTC m=+0.144803151 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:49:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:31.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:32 np0005539551 nova_compute[227360]: 2025-11-29 07:49:32.889 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:32 np0005539551 nova_compute[227360]: 2025-11-29 07:49:32.889 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:32 np0005539551 nova_compute[227360]: 2025-11-29 07:49:32.915 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:49:32 np0005539551 nova_compute[227360]: 2025-11-29 07:49:32.970 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:32 np0005539551 nova_compute[227360]: 2025-11-29 07:49:32.970 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.026 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.185 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.185 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.188 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.194 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.194 227364 INFO nova.compute.claims [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Claim successful on node compute-1.ctlplane.example.com
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.279 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.359 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.359 227364 DEBUG nova.compute.provider_tree [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.379 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.398 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 02:49:33 np0005539551 nova_compute[227360]: 2025-11-29 07:49:33.518 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:33.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.220530033s
Nov 29 02:49:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.220530033s
Nov 29 02:49:36 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.220942497s, txc = 0x5616f515a900
Nov 29 02:49:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.444255829s, txc = 0x5616f3252600
Nov 29 02:49:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.444297314s, txc = 0x5616f4fca900
Nov 29 02:49:37 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.444263458s, txc = 0x5616f32e1200
Nov 29 02:49:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2723785956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.182 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.187 227364 DEBUG nova.compute.provider_tree [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.455 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.511 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 6.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.513 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.518 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 6.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.526 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.526 227364 INFO nova.compute.claims [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:49:39 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396890474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:49:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396890474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.673 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.673 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.738 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.761 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:39.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.886 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.888 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.888 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Creating image(s)#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.926 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.958 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.992 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.996 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:39 np0005539551 nova_compute[227360]: 2025-11-29 07:49:39.997 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.000 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.025 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.026 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.150 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.250 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3784179406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.481 227364 DEBUG nova.virt.libvirt.imagebackend [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/4873db8c-b414-4e95-acd9-77caabebe722/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/4873db8c-b414-4e95-acd9-77caabebe722/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.484 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.491 227364 DEBUG nova.compute.provider_tree [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.515 227364 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.582 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.582 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.584 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.589 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.590 227364 INFO nova.compute.claims [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.658 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Automatically allocating a network for project 3c7cd563ba394223a76bd2579800406c. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.663 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.664 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.689 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.712 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.787 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.804 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.806 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.807 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Creating image(s)#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.834 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.862 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.889 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:40 np0005539551 nova_compute[227360]: 2025-11-29 07:49:40.891 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:41 np0005539551 nova_compute[227360]: 2025-11-29 07:49:41.152 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Automatically allocating a network for project 3c7cd563ba394223a76bd2579800406c. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Nov 29 02:49:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1529780649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:41 np0005539551 nova_compute[227360]: 2025-11-29 07:49:41.895 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:41 np0005539551 nova_compute[227360]: 2025-11-29 07:49:41.900 227364 DEBUG nova.compute.provider_tree [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:49:41 np0005539551 nova_compute[227360]: 2025-11-29 07:49:41.954 227364 ERROR nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [req-6b312020-9d95-4079-807d-f47796f18da6] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 67c71d68-0dd7-4589-b775-189b4191a844.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-6b312020-9d95-4079-807d-f47796f18da6"}]}#033[00m
Nov 29 02:49:41 np0005539551 nova_compute[227360]: 2025-11-29 07:49:41.975 227364 DEBUG nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:49:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.002 227364 DEBUG nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.003 227364 DEBUG nova.compute.provider_tree [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.024 227364 DEBUG nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.055 227364 DEBUG nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.181 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.274 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.334 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.335 227364 DEBUG nova.virt.images [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] 4873db8c-b414-4e95-acd9-77caabebe722 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.336 227364 DEBUG nova.privsep.utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.336 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.617 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1636009062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.621 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.641 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.648 227364 DEBUG nova.compute.provider_tree [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.703 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.704 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.733 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.736 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.755 227364 DEBUG nova.scheduler.client.report [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updated inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 with generation 5 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.756 227364 DEBUG nova.compute.provider_tree [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating resource provider 67c71d68-0dd7-4589-b775-189b4191a844 generation from 5 to 6 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.757 227364 DEBUG nova.compute.provider_tree [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.762 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.763 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.790 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.793 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 22f24272-bec2-444d-8c83-30e63ce6badb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.833 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.834 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.897 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.898 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.917 227364 INFO nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:42 np0005539551 nova_compute[227360]: 2025-11-29 07:49:42.933 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.051 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.052 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.053 227364 INFO nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Creating image(s)#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.118 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.148 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.175 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.179 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.227 227364 WARNING oslo_policy.policy [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.228 227364 WARNING oslo_policy.policy [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.232 227364 DEBUG nova.policy [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd57d713485e84d19a429533b570c4189', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06acd02df57149e795d2be57787bb9ed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.235 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 22f24272-bec2-444d-8c83-30e63ce6badb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.236 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.296 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.297 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.298 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.298 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.327 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.330 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.407 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] resizing rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.437 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] resizing rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.568 227364 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'migration_context' on Instance uuid 22f24272-bec2-444d-8c83-30e63ce6badb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.572 227364 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'migration_context' on Instance uuid df95709e-b2a2-4e72-a99a-4df9e0fde1c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.600 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.601 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Ensure instance console log exists: /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.601 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.602 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.602 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.602 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.603 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Ensure instance console log exists: /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.603 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.603 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.604 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.654 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.733 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] resizing rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:49:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:43.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.836 227364 DEBUG nova.objects.instance [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'migration_context' on Instance uuid 387c7b1f-564d-472d-a21c-f0cd49d2fd4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.853 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.853 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Ensure instance console log exists: /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.854 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.854 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:43 np0005539551 nova_compute[227360]: 2025-11-29 07:49:43.854 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:45 np0005539551 nova_compute[227360]: 2025-11-29 07:49:45.041 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Successfully created port: 817d7796-296d-423b-85bf-555c653c2008 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:49:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:45.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:47 np0005539551 nova_compute[227360]: 2025-11-29 07:49:47.768 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Successfully updated port: 817d7796-296d-423b-85bf-555c653c2008 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:49:47 np0005539551 nova_compute[227360]: 2025-11-29 07:49:47.786 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:47 np0005539551 nova_compute[227360]: 2025-11-29 07:49:47.787 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquired lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:47 np0005539551 nova_compute[227360]: 2025-11-29 07:49:47.787 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:49:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:47.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:48.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:48 np0005539551 nova_compute[227360]: 2025-11-29 07:49:48.128 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:48 np0005539551 nova_compute[227360]: 2025-11-29 07:49:48.421 227364 DEBUG nova.compute.manager [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-changed-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:48 np0005539551 nova_compute[227360]: 2025-11-29 07:49:48.421 227364 DEBUG nova.compute.manager [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Refreshing instance network info cache due to event network-changed-817d7796-296d-423b-85bf-555c653c2008. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:49:48 np0005539551 nova_compute[227360]: 2025-11-29 07:49:48.422 227364 DEBUG oslo_concurrency.lockutils [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 02:49:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.917 227364 DEBUG nova.network.neutron [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updating instance_info_cache with network_info: [{"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.942 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Releasing lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.942 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Instance network_info: |[{"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.943 227364 DEBUG oslo_concurrency.lockutils [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.943 227364 DEBUG nova.network.neutron [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Refreshing network info cache for port 817d7796-296d-423b-85bf-555c653c2008 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.946 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Start _get_guest_xml network_info=[{"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.950 227364 WARNING nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.956 227364 DEBUG nova.virt.libvirt.host [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.957 227364 DEBUG nova.virt.libvirt.host [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.973 227364 DEBUG nova.virt.libvirt.host [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.973 227364 DEBUG nova.virt.libvirt.host [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.974 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.975 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:49:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='247595871',id=4,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1134213089',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.975 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.975 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.975 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.976 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.976 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.976 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.976 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.976 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.977 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.977 227364 DEBUG nova.virt.hardware [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.980 227364 DEBUG nova.privsep.utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:49:49 np0005539551 nova_compute[227360]: 2025-11-29 07:49:49.980 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:50.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 02:49:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2436172742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.477 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.512 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.516 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2967300164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.943 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.945 227364 DEBUG nova.virt.libvirt.vif [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-734780199',id=6,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-ucmg2yuk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=387c7b1f-564d-472d-a21c-f0cd49d2fd4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.945 227364 DEBUG nova.network.os_vif_util [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.946 227364 DEBUG nova.network.os_vif_util [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.949 227364 DEBUG nova.objects.instance [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 387c7b1f-564d-472d-a21c-f0cd49d2fd4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.968 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <uuid>387c7b1f-564d-472d-a21c-f0cd49d2fd4c</uuid>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <name>instance-00000006</name>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-734780199</nova:name>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:49:49</nova:creationTime>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1134213089">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:user uuid="d57d713485e84d19a429533b570c4189">tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member</nova:user>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:project uuid="06acd02df57149e795d2be57787bb9ed">tempest-ServersWithSpecificFlavorTestJSON-456097839</nova:project>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <nova:port uuid="817d7796-296d-423b-85bf-555c653c2008">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="serial">387c7b1f-564d-472d-a21c-f0cd49d2fd4c</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="uuid">387c7b1f-564d-472d-a21c-f0cd49d2fd4c</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:f2:26:38"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <target dev="tap817d7796-29"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/console.log" append="off"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:49:50 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:49:50 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:49:50 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:49:50 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.970 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Preparing to wait for external event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.971 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.971 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.971 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.972 227364 DEBUG nova.virt.libvirt.vif [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-734780199',id=6,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-ucmg2yuk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=387c7b1f-564d-472d-a21c-f0cd49d2fd4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.973 227364 DEBUG nova.network.os_vif_util [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.974 227364 DEBUG nova.network.os_vif_util [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:50 np0005539551 nova_compute[227360]: 2025-11-29 07:49:50.974 227364 DEBUG os_vif [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.013 227364 DEBUG ovsdbapp.backend.ovs_idl [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.013 227364 DEBUG ovsdbapp.backend.ovs_idl [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.013 227364 DEBUG ovsdbapp.backend.ovs_idl [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.014 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.014 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.015 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.015 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.016 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.018 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.031 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.032 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.033 227364 INFO oslo.privsep.daemon [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpdqom_xv5/privsep.sock']#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.731 227364 INFO oslo.privsep.daemon [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.578 231487 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.585 231487 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.589 231487 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 29 02:49:51 np0005539551 nova_compute[227360]: 2025-11-29 07:49:51.589 231487 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231487#033[00m
Nov 29 02:49:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:52.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.104 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap817d7796-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.106 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap817d7796-29, col_values=(('external_ids', {'iface-id': '817d7796-296d-423b-85bf-555c653c2008', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:26:38', 'vm-uuid': '387c7b1f-564d-472d-a21c-f0cd49d2fd4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:52 np0005539551 NetworkManager[48922]: <info>  [1764402592.1101] manager: (tap817d7796-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.112 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.120 227364 INFO os_vif [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29')#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.675 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.789 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.790 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.790 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No VIF found with MAC fa:16:3e:f2:26:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.791 227364 INFO nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Using config drive#033[00m
Nov 29 02:49:52 np0005539551 nova_compute[227360]: 2025-11-29 07:49:52.821 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:53 np0005539551 nova_compute[227360]: 2025-11-29 07:49:53.665 227364 DEBUG nova.network.neutron [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updated VIF entry in instance network info cache for port 817d7796-296d-423b-85bf-555c653c2008. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:49:53 np0005539551 nova_compute[227360]: 2025-11-29 07:49:53.666 227364 DEBUG nova.network.neutron [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updating instance_info_cache with network_info: [{"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:53 np0005539551 nova_compute[227360]: 2025-11-29 07:49:53.807 227364 DEBUG oslo_concurrency.lockutils [req-ae57dbab-1778-4131-a021-20c39e2f3cee req-0a44bdc6-4df7-49b3-9a06-047f2600efc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:49:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:53.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:49:53 np0005539551 nova_compute[227360]: 2025-11-29 07:49:53.885 227364 INFO nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Creating config drive at /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config#033[00m
Nov 29 02:49:53 np0005539551 nova_compute[227360]: 2025-11-29 07:49:53.891 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2my__lgs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.034 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2my__lgs" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.067 227364 DEBUG nova.storage.rbd_utils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.070 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.511 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Automatically allocated network: {'id': '01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'name': 'auto_allocated_network', 'tenant_id': '3c7cd563ba394223a76bd2579800406c', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['6992fe6c-c595-4393-8b65-e8b3c97ce60b', 'e8640056-71aa-4d1b-9c1e-9f992a064096'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:49:40Z', 'updated_at': '2025-11-29T07:49:53Z', 'revision_number': 4, 'project_id': '3c7cd563ba394223a76bd2579800406c'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.512 227364 DEBUG nova.policy [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01739124bee74c899af6384f8ec2d427', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c7cd563ba394223a76bd2579800406c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.683 227364 DEBUG oslo_concurrency.processutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config 387c7b1f-564d-472d-a21c-f0cd49d2fd4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.683 227364 INFO nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Deleting local config drive /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c/disk.config because it was imported into RBD.#033[00m
Nov 29 02:49:54 np0005539551 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:49:54 np0005539551 systemd[1]: Started libvirt secret daemon.
Nov 29 02:49:54 np0005539551 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 02:49:54 np0005539551 kernel: tap817d7796-29: entered promiscuous mode
Nov 29 02:49:54 np0005539551 NetworkManager[48922]: <info>  [1764402594.8087] manager: (tap817d7796-29): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 02:49:54 np0005539551 ovn_controller[130266]: 2025-11-29T07:49:54Z|00027|binding|INFO|Claiming lport 817d7796-296d-423b-85bf-555c653c2008 for this chassis.
Nov 29 02:49:54 np0005539551 ovn_controller[130266]: 2025-11-29T07:49:54Z|00028|binding|INFO|817d7796-296d-423b-85bf-555c653c2008: Claiming fa:16:3e:f2:26:38 10.100.0.13
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.809 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.816 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:54.826 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:26:38 10.100.0.13'], port_security=['fa:16:3e:f2:26:38 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '387c7b1f-564d-472d-a21c-f0cd49d2fd4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06acd02df57149e795d2be57787bb9ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea30fb5a-0cff-4126-aaa9-22a68d1fc7db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=157c85d0-7492-4bd9-b8bf-525034a08746, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=817d7796-296d-423b-85bf-555c653c2008) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:54.828 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 817d7796-296d-423b-85bf-555c653c2008 in datapath 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 bound to our chassis#033[00m
Nov 29 02:49:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:54.831 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4#033[00m
Nov 29 02:49:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:54.832 139482 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpodwl9g52/privsep.sock']#033[00m
Nov 29 02:49:54 np0005539551 systemd-udevd[231585]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:49:54 np0005539551 NetworkManager[48922]: <info>  [1764402594.8526] device (tap817d7796-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:49:54 np0005539551 NetworkManager[48922]: <info>  [1764402594.8537] device (tap817d7796-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:49:54 np0005539551 systemd-machined[190756]: New machine qemu-1-instance-00000006.
Nov 29 02:49:54 np0005539551 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.886 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:54 np0005539551 ovn_controller[130266]: 2025-11-29T07:49:54Z|00029|binding|INFO|Setting lport 817d7796-296d-423b-85bf-555c653c2008 ovn-installed in OVS
Nov 29 02:49:54 np0005539551 ovn_controller[130266]: 2025-11-29T07:49:54Z|00030|binding|INFO|Setting lport 817d7796-296d-423b-85bf-555c653c2008 up in Southbound
Nov 29 02:49:54 np0005539551 nova_compute[227360]: 2025-11-29 07:49:54.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.330 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Successfully created port: 89b28868-4d53-4327-ba0f-01f232ee632c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.524 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402595.5235207, 387c7b1f-564d-472d-a21c-f0cd49d2fd4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.524 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] VM Started (Lifecycle Event)#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.547 139482 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.548 139482 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpodwl9g52/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.421 231643 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.425 231643 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.427 231643 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.427 231643 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231643#033[00m
Nov 29 02:49:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:55.550 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1332e88a-b8a6-4eed-a4ae-9b998f3302bb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.573 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.576 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402595.5236886, 387c7b1f-564d-472d-a21c-f0cd49d2fd4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.577 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.600 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.603 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.638 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.788 227364 DEBUG nova.compute.manager [req-4793d922-376e-46bb-bdec-b8334f207d02 req-b68cd692-9f19-4156-b1cb-52105611b9ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.789 227364 DEBUG oslo_concurrency.lockutils [req-4793d922-376e-46bb-bdec-b8334f207d02 req-b68cd692-9f19-4156-b1cb-52105611b9ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.789 227364 DEBUG oslo_concurrency.lockutils [req-4793d922-376e-46bb-bdec-b8334f207d02 req-b68cd692-9f19-4156-b1cb-52105611b9ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.790 227364 DEBUG oslo_concurrency.lockutils [req-4793d922-376e-46bb-bdec-b8334f207d02 req-b68cd692-9f19-4156-b1cb-52105611b9ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.790 227364 DEBUG nova.compute.manager [req-4793d922-376e-46bb-bdec-b8334f207d02 req-b68cd692-9f19-4156-b1cb-52105611b9ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Processing event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.791 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.794 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402595.7944233, 387c7b1f-564d-472d-a21c-f0cd49d2fd4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.794 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.808 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.813 227364 INFO nova.virt.libvirt.driver [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Instance spawned successfully.#033[00m
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.814 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:49:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.821 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.826 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.848 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.848 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.849 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.850 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.850 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.851 227364 DEBUG nova.virt.libvirt.driver [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:49:55 np0005539551 nova_compute[227360]: 2025-11-29 07:49:55.887 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:49:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:56.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.100 231643 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.101 231643 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.101 231643 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:56 np0005539551 nova_compute[227360]: 2025-11-29 07:49:56.149 227364 INFO nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Took 13.10 seconds to spawn the instance on the hypervisor.
Nov 29 02:49:56 np0005539551 nova_compute[227360]: 2025-11-29 07:49:56.150 227364 DEBUG nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:49:56 np0005539551 nova_compute[227360]: 2025-11-29 07:49:56.244 227364 INFO nova.compute.manager [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Took 16.02 seconds to build instance.
Nov 29 02:49:56 np0005539551 nova_compute[227360]: 2025-11-29 07:49:56.264 227364 DEBUG oslo_concurrency.lockutils [None req-7687b44e-a219-4fc5-9d1e-ced2cec787e3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.805 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[97e60b56-9113-4a21-b633-a85824b6b3d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.806 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1968b9ed-31 in ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.808 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1968b9ed-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.809 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d7f12d-3dd1-489b-8931-add0e4cba505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.812 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9168be7e-0b8c-4eb5-8a50-2e0345f3d266]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.848 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b5ed46-6d99-4215-8b66-43bac3ac24c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.869 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f04a8451-fe70-4953-9e13-ac451f0e21b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:56.872 139482 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpokvfd3ak/privsep.sock']
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.020 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Successfully updated port: 89b28868-4d53-4327-ba0f-01f232ee632c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.036 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.037 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquired lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.038 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.263 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.430 227364 DEBUG nova.compute.manager [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-changed-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.432 227364 DEBUG nova.compute.manager [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Refreshing instance network info cache due to event network-changed-89b28868-4d53-4327-ba0f-01f232ee632c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.432 227364 DEBUG oslo_concurrency.lockutils [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.627 139482 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.628 139482 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpokvfd3ak/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.473 231659 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.478 231659 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.479 231659 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.480 231659 INFO oslo.privsep.daemon [-] privsep daemon running as pid 231659
Nov 29 02:49:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:57.631 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5d302008-7fc6-4b9e-9eea-2a168c25e38e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.677 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.936 227364 DEBUG nova.compute.manager [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.936 227364 DEBUG oslo_concurrency.lockutils [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.936 227364 DEBUG oslo_concurrency.lockutils [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.937 227364 DEBUG oslo_concurrency.lockutils [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.937 227364 DEBUG nova.compute.manager [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] No waiting events found dispatching network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:49:57 np0005539551 nova_compute[227360]: 2025-11-29 07:49:57.937 227364 WARNING nova.compute.manager [req-08800229-f9c3-4ba9-b112-15e238865277 req-74f81707-4324-4a38-ac0d-0affc52ef3e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received unexpected event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 for instance with vm_state active and task_state None.
Nov 29 02:49:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.173 231659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.173 231659 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.174 231659 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.629 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updating instance_info_cache with network_info: [{"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.667 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Releasing lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.667 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Instance network_info: |[{"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.668 227364 DEBUG oslo_concurrency.lockutils [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.668 227364 DEBUG nova.network.neutron [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Refreshing network info cache for port 89b28868-4d53-4327-ba0f-01f232ee632c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.671 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Start _get_guest_xml network_info=[{"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.675 227364 WARNING nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.680 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.681 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.684 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.684 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.685 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.686 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.686 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.686 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.687 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.687 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.687 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.688 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.688 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.688 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.689 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.689 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:49:58 np0005539551 nova_compute[227360]: 2025-11-29 07:49:58.692 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.854 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b929088f-e63d-44b3-8668-0d6910f6a3c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.876 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eeec36e2-1dfa-4469-ae09-db2f942a5461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:58 np0005539551 NetworkManager[48922]: <info>  [1764402598.8775] manager: (tap1968b9ed-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 02:49:58 np0005539551 systemd-udevd[231691]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.901 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3a7080-5bbe-4773-b304-2b7c52a336df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.904 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9a0ce7-f8e7-4341-954e-38c117180dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 NetworkManager[48922]: <info>  [1764402598.9295] device (tap1968b9ed-30): carrier: link connected
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.935 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[72a4cce6-c73f-4e64-9392-b5d2aaa7731b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.951 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6986d6eb-b400-4467-85f0-d65fec206c66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1968b9ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:e5:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564580, 'reachable_time': 30224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231709, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.966 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa65101-f6cf-4c19-8cbc-8428666a5351]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:e5cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564580, 'tstamp': 564580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231710, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:58.982 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3529a8f6-1e9b-427a-9712-3850497f9eef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1968b9ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:e5:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564580, 'reachable_time': 30224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231711, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 02:49:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.014 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb8ae91-60c2-4bd4-a713-ce8f7edd63c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.077 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[386411b4-e882-4e95-8c2e-cf60b3d6d372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.079 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1968b9ed-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.079 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.080 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1968b9ed-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:59 np0005539551 NetworkManager[48922]: <info>  [1764402599.0834] manager: (tap1968b9ed-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 02:49:59 np0005539551 kernel: tap1968b9ed-30: entered promiscuous mode
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.086 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1968b9ed-30, col_values=(('external_ids', {'iface-id': '0a5d766f-ec84-47c0-9590-72934ab05c0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.087 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:59 np0005539551 ovn_controller[130266]: 2025-11-29T07:49:59Z|00031|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.104 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.105 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d1101c6d-d3fa-4de2-9ce0-fb82b01bd7af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.106 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:49:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:49:59.107 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'env', 'PROCESS_TAG=haproxy-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:49:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1772143978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.210 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.242 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.245 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:59 np0005539551 podman[231784]: 2025-11-29 07:49:59.500676227 +0000 UTC m=+0.061187071 container create 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:49:59 np0005539551 systemd[1]: Started libpod-conmon-2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582.scope.
Nov 29 02:49:59 np0005539551 podman[231784]: 2025-11-29 07:49:59.471612245 +0000 UTC m=+0.032123109 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:49:59 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:49:59 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97feefce17446c1b19f28c5179f2e1f08b88bc5b372b8538c1b5c4d2b304a920/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:59 np0005539551 podman[231784]: 2025-11-29 07:49:59.589275245 +0000 UTC m=+0.149786109 container init 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:59 np0005539551 podman[231784]: 2025-11-29 07:49:59.598707485 +0000 UTC m=+0.159218329 container start 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:59 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [NOTICE]   (231803) : New worker (231805) forked
Nov 29 02:49:59 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [NOTICE]   (231803) : Loading success.
Nov 29 02:49:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3824662842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.687 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.689 227364 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-1',id=3,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:39Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=df95709e-b2a2-4e72-a99a-4df9e0fde1c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.689 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.690 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.692 227364 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'pci_devices' on Instance uuid df95709e-b2a2-4e72-a99a-4df9e0fde1c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.716 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <uuid>df95709e-b2a2-4e72-a99a-4df9e0fde1c4</uuid>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <name>instance-00000003</name>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:name>tempest-tempest.common.compute-instance-497379200-1</nova:name>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:49:58</nova:creationTime>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:user uuid="01739124bee74c899af6384f8ec2d427">tempest-AutoAllocateNetworkTest-1372302389-project-member</nova:user>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:project uuid="3c7cd563ba394223a76bd2579800406c">tempest-AutoAllocateNetworkTest-1372302389</nova:project>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <nova:port uuid="89b28868-4d53-4327-ba0f-01f232ee632c">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::5b" ipVersion="6"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.0.118" ipVersion="4"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="serial">df95709e-b2a2-4e72-a99a-4df9e0fde1c4</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="uuid">df95709e-b2a2-4e72-a99a-4df9e0fde1c4</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:95:03:64"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <target dev="tap89b28868-4d"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/console.log" append="off"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:49:59 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:49:59 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:49:59 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:49:59 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.720 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Preparing to wait for external event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.720 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.720 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.721 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.722 227364 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-1',id=3,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:39Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=df95709e-b2a2-4e72-a99a-4df9e0fde1c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.722 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.723 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.723 227364 DEBUG os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.724 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.725 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.727 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.727 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89b28868-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.728 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89b28868-4d, col_values=(('external_ids', {'iface-id': '89b28868-4d53-4327-ba0f-01f232ee632c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:03:64', 'vm-uuid': 'df95709e-b2a2-4e72-a99a-4df9e0fde1c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.729 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:59 np0005539551 NetworkManager[48922]: <info>  [1764402599.7301] manager: (tap89b28868-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.732 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.734 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.735 227364 INFO os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d')#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.812 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.812 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.812 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No VIF found with MAC fa:16:3e:95:03:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.813 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Using config drive#033[00m
Nov 29 02:49:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:49:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:59 np0005539551 nova_compute[227360]: 2025-11-29 07:49:59.835 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0780] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0787] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0797] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0800] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0809] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0814] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0818] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:00 np0005539551 NetworkManager[48922]: <info>  [1764402600.0820] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.289 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Creating config drive at /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.293 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphrabljry execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.371 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:00 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:00Z|00032|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.420 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphrabljry" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.442 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.446 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.955 227364 DEBUG nova.compute.manager [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-changed-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.956 227364 DEBUG nova.compute.manager [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Refreshing instance network info cache due to event network-changed-817d7796-296d-423b-85bf-555c653c2008. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.957 227364 DEBUG oslo_concurrency.lockutils [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.957 227364 DEBUG oslo_concurrency.lockutils [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:00 np0005539551 nova_compute[227360]: 2025-11-29 07:50:00.958 227364 DEBUG nova.network.neutron [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Refreshing network info cache for port 817d7796-296d-423b-85bf-555c653c2008 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:01 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.378 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config df95709e-b2a2-4e72-a99a-4df9e0fde1c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.932s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.379 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Deleting local config drive /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:01 np0005539551 kernel: tap89b28868-4d: entered promiscuous mode
Nov 29 02:50:01 np0005539551 systemd-udevd[231702]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:01 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:01Z|00033|binding|INFO|Claiming lport 89b28868-4d53-4327-ba0f-01f232ee632c for this chassis.
Nov 29 02:50:01 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:01Z|00034|binding|INFO|89b28868-4d53-4327-ba0f-01f232ee632c: Claiming fa:16:3e:95:03:64 10.1.0.118 fdfe:381f:8400:1::5b
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.4234] manager: (tap89b28868-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.423 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.4364] device (tap89b28868-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:50:01 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:01Z|00035|binding|INFO|Setting lport 89b28868-4d53-4327-ba0f-01f232ee632c ovn-installed in OVS
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.4422] device (tap89b28868-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:01Z|00036|binding|INFO|Setting lport 89b28868-4d53-4327-ba0f-01f232ee632c up in Southbound
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.469 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:03:64 10.1.0.118 fdfe:381f:8400:1::5b'], port_security=['fa:16:3e:95:03:64 10.1.0.118 fdfe:381f:8400:1::5b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.118/26 fdfe:381f:8400:1::5b/64', 'neutron:device_id': 'df95709e-b2a2-4e72-a99a-4df9e0fde1c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=89b28868-4d53-4327-ba0f-01f232ee632c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.471 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 89b28868-4d53-4327-ba0f-01f232ee632c in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 bound to our chassis#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.473 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363#033[00m
Nov 29 02:50:01 np0005539551 systemd-machined[190756]: New machine qemu-2-instance-00000003.
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.483 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[777724fa-b7b4-42eb-a3e6-437b5729d841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.484 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d0d21b-e1 in ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.485 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d0d21b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.486 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a78c22ef-a296-4fd3-a252-a0d830a7b13b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.487 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e960b96a-7af3-45ac-ae0f-241c7caa62cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.513 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c87070-7bfa-444a-b4b0-15f740157ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.529 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[39220b5a-88e6-422a-8290-8372255c00e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.554 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8136596b-7413-4790-992e-e38b394a033c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 podman[231890]: 2025-11-29 07:50:01.556205666 +0000 UTC m=+0.087348114 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:50:01 np0005539551 podman[231889]: 2025-11-29 07:50:01.558460648 +0000 UTC m=+0.092733373 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.5618] manager: (tap01d0d21b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.560 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[77dbed9e-c72e-4a8c-bd06-c46197aebb38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 podman[231888]: 2025-11-29 07:50:01.589097164 +0000 UTC m=+0.123429371 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.591 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8e41d851-b05d-4274-aa00-5d49a0ea9eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.596 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e42e557d-0dc2-4c71-9874-b80cdbac7471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.6131] device (tap01d0d21b-e0): carrier: link connected
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.617 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7e8d09-9695-433a-ba68-512eecf11f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.633 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4f39faaf-3dcc-4bcd-925a-74a2413627e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564849, 'reachable_time': 26496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 231966, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.649 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18c74946-3e72-4bf3-a284-c364e6b883d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:9497'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564849, 'tstamp': 564849}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 231967, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.664 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4494f18f-dfe6-42f0-a973-508691060589]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564849, 'reachable_time': 26496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 231968, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.688 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[57efa6be-1f95-4666-873a-d1c1a2264f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.737 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a2483368-3b37-4a42-a116-4f838b49c8c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.739 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.739 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.739 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d0d21b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:01 np0005539551 NetworkManager[48922]: <info>  [1764402601.7420] manager: (tap01d0d21b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 02:50:01 np0005539551 kernel: tap01d0d21b-e0: entered promiscuous mode
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.745 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.750 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d0d21b-e0, col_values=(('external_ids', {'iface-id': 'c5a666a5-4b3e-4d4d-821a-ea0f64e84c84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.752 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.754 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:50:01 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:01Z|00037|binding|INFO|Releasing lport c5a666a5-4b3e-4d4d-821a-ea0f64e84c84 from this chassis (sb_readonly=0)
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.756 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9c88c5-5223-44f9-b248-c0601ba50fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.756 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-01d0d21b-eaad-4f5d-82d1-0f4d31e80363
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 01d0d21b-eaad-4f5d-82d1-0f4d31e80363
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:50:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:01.757 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'env', 'PROCESS_TAG=haproxy-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:50:01 np0005539551 nova_compute[227360]: 2025-11-29 07:50:01.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:02.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:02 np0005539551 podman[232037]: 2025-11-29 07:50:02.117006276 +0000 UTC m=+0.053921580 container create 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.119 227364 DEBUG nova.compute.manager [req-65c2d5b2-c871-438c-ac77-139587516239 req-3ed3cd10-f310-43e8-9271-72b32c984549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.119 227364 DEBUG oslo_concurrency.lockutils [req-65c2d5b2-c871-438c-ac77-139587516239 req-3ed3cd10-f310-43e8-9271-72b32c984549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.124 227364 DEBUG oslo_concurrency.lockutils [req-65c2d5b2-c871-438c-ac77-139587516239 req-3ed3cd10-f310-43e8-9271-72b32c984549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.124 227364 DEBUG oslo_concurrency.lockutils [req-65c2d5b2-c871-438c-ac77-139587516239 req-3ed3cd10-f310-43e8-9271-72b32c984549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.125 227364 DEBUG nova.compute.manager [req-65c2d5b2-c871-438c-ac77-139587516239 req-3ed3cd10-f310-43e8-9271-72b32c984549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Processing event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:50:02 np0005539551 systemd[1]: Started libpod-conmon-38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f.scope.
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.178 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402602.1783276, df95709e-b2a2-4e72-a99a-4df9e0fde1c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:02 np0005539551 podman[232037]: 2025-11-29 07:50:02.087377267 +0000 UTC m=+0.024292561 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.180 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.185 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.189 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.193 227364 INFO nova.virt.libvirt.driver [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Instance spawned successfully.#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.193 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:02 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:50:02 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff993527b6560eea7761b3b078ae75ba7a0268a76f21919e5d914b86279486d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:02 np0005539551 podman[232037]: 2025-11-29 07:50:02.21741869 +0000 UTC m=+0.154333974 container init 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.218 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.224 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:02 np0005539551 podman[232037]: 2025-11-29 07:50:02.224946947 +0000 UTC m=+0.161862221 container start 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.227 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.228 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.228 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.228 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.229 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.229 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:02 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [NOTICE]   (232062) : New worker (232064) forked
Nov 29 02:50:02 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [NOTICE]   (232062) : Loading success.
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.269 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.269 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402602.1794257, df95709e-b2a2-4e72-a99a-4df9e0fde1c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.270 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.290 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.295 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402602.1880467, df95709e-b2a2-4e72-a99a-4df9e0fde1c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.295 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.322 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.326 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.330 227364 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Took 22.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.330 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.367 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.406 227364 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Took 29.39 seconds to build instance.#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.433 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.436 227364 DEBUG nova.network.neutron [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updated VIF entry in instance network info cache for port 89b28868-4d53-4327-ba0f-01f232ee632c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.436 227364 DEBUG nova.network.neutron [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updating instance_info_cache with network_info: [{"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.451 227364 DEBUG oslo_concurrency.lockutils [req-358a6594-fd36-4171-a876-39d29e31adcd req-e72753e7-af29-4462-aece-0e13c1eebf03 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:02 np0005539551 nova_compute[227360]: 2025-11-29 07:50:02.678 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:03 np0005539551 nova_compute[227360]: 2025-11-29 07:50:03.093 227364 DEBUG nova.network.neutron [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updated VIF entry in instance network info cache for port 817d7796-296d-423b-85bf-555c653c2008. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:03 np0005539551 nova_compute[227360]: 2025-11-29 07:50:03.094 227364 DEBUG nova.network.neutron [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updating instance_info_cache with network_info: [{"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:03 np0005539551 nova_compute[227360]: 2025-11-29 07:50:03.121 227364 DEBUG oslo_concurrency.lockutils [req-c65e0bbf-c0d0-4495-8659-64001fdb6d2d req-d035e1e6-600b-4f08-9e1a-4ac3e252ba77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-387c7b1f-564d-472d-a21c-f0cd49d2fd4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:03 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:03Z|00038|memory|INFO|peak resident set size grew 56% in last 1336.7 seconds, from 16128 kB to 25088 kB
Nov 29 02:50:03 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:03Z|00039|memory|INFO|idl-cells-OVN_Southbound:10247 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:371 lflow-cache-entries-cache-matches:295 lflow-cache-size-KB:1632 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:637 ofctrl_installed_flow_usage-KB:465 ofctrl_sb_flow_ref_usage-KB:241
Nov 29 02:50:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:03.372 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:03.373 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:50:03 np0005539551 nova_compute[227360]: 2025-11-29 07:50:03.374 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:03.374 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:03 np0005539551 nova_compute[227360]: 2025-11-29 07:50:03.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:04.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.730 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.994 227364 DEBUG nova.compute.manager [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.994 227364 DEBUG oslo_concurrency.lockutils [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.994 227364 DEBUG oslo_concurrency.lockutils [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.995 227364 DEBUG oslo_concurrency.lockutils [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.995 227364 DEBUG nova.compute.manager [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] No waiting events found dispatching network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:04 np0005539551 nova_compute[227360]: 2025-11-29 07:50:04.995 227364 WARNING nova.compute.manager [req-e05f8cfd-b55c-4310-8a2a-a0dd7f6c1c74 req-3b20a4a1-a24d-461d-9bcd-21e3664aca8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received unexpected event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c for instance with vm_state active and task_state None.#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.531 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.706 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.707 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.707 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:50:05 np0005539551 nova_compute[227360]: 2025-11-29 07:50:05.707 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid df95709e-b2a2-4e72-a99a-4df9e0fde1c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:06.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:06 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:06Z|00040|binding|INFO|Releasing lport c5a666a5-4b3e-4d4d-821a-ea0f64e84c84 from this chassis (sb_readonly=0)
Nov 29 02:50:06 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:06Z|00041|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:50:06 np0005539551 nova_compute[227360]: 2025-11-29 07:50:06.384 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:07 np0005539551 nova_compute[227360]: 2025-11-29 07:50:07.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:07.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:08.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.524 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updating instance_info_cache with network_info: [{"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.648 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-df95709e-b2a2-4e72-a99a-4df9e0fde1c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.648 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.648 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.649 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.649 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.649 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.649 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.650 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.676 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.676 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.676 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.676 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:50:08 np0005539551 nova_compute[227360]: 2025-11-29 07:50:08.677 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/218191846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.141 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.339 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.339 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.345 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.346 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.419 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Automatically allocated network: {'id': '01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'name': 'auto_allocated_network', 'tenant_id': '3c7cd563ba394223a76bd2579800406c', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['6992fe6c-c595-4393-8b65-e8b3c97ce60b', 'e8640056-71aa-4d1b-9c1e-9f992a064096'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:49:40Z', 'updated_at': '2025-11-29T07:49:53Z', 'revision_number': 4, 'project_id': '3c7cd563ba394223a76bd2579800406c'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.421 227364 DEBUG nova.policy [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01739124bee74c899af6384f8ec2d427', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c7cd563ba394223a76bd2579800406c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.558 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.560 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4707MB free_disk=20.838733673095703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.560 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.560 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.668 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance df95709e-b2a2-4e72-a99a-4df9e0fde1c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.669 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 22f24272-bec2-444d-8c83-30e63ce6badb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.669 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 387c7b1f-564d-472d-a21c-f0cd49d2fd4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.669 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.669 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.736 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:09 np0005539551 nova_compute[227360]: 2025-11-29 07:50:09.771 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:50:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:50:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:10.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2307713951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:10 np0005539551 nova_compute[227360]: 2025-11-29 07:50:10.258 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:10 np0005539551 nova_compute[227360]: 2025-11-29 07:50:10.266 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:10 np0005539551 nova_compute[227360]: 2025-11-29 07:50:10.281 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:10 np0005539551 nova_compute[227360]: 2025-11-29 07:50:10.301 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:50:10 np0005539551 nova_compute[227360]: 2025-11-29 07:50:10.301 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:10 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:10Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:26:38 10.100.0.13
Nov 29 02:50:10 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:10Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:26:38 10.100.0.13
Nov 29 02:50:11 np0005539551 nova_compute[227360]: 2025-11-29 07:50:11.296 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:50:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:12.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:50:12 np0005539551 nova_compute[227360]: 2025-11-29 07:50:12.682 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:13 np0005539551 nova_compute[227360]: 2025-11-29 07:50:13.195 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Successfully created port: cb35bbab-def8-4ef1-a4d8-30358cb1c55c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:50:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:14.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.698 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Successfully updated port: cb35bbab-def8-4ef1-a4d8-30358cb1c55c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.716 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.716 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquired lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.716 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.739 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.832 227364 DEBUG nova.compute.manager [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-changed-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.832 227364 DEBUG nova.compute.manager [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Refreshing instance network info cache due to event network-changed-cb35bbab-def8-4ef1-a4d8-30358cb1c55c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.832 227364 DEBUG oslo_concurrency.lockutils [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:14 np0005539551 nova_compute[227360]: 2025-11-29 07:50:14.988 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:16 np0005539551 nova_compute[227360]: 2025-11-29 07:50:16.879 227364 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Updating instance_info_cache with network_info: [{"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.188 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Releasing lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.190 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Instance network_info: |[{"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.191 227364 DEBUG oslo_concurrency.lockutils [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.192 227364 DEBUG nova.network.neutron [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Refreshing network info cache for port cb35bbab-def8-4ef1-a4d8-30358cb1c55c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.197 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Start _get_guest_xml network_info=[{"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.206 227364 WARNING nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.211 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.213 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.226 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.226 227364 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.228 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.228 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.229 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.230 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.230 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.231 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.231 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.232 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.232 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.233 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.233 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.234 227364 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.238 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.685 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.826 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.826 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.827 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.827 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.827 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.829 227364 INFO nova.compute.manager [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Terminating instance#033[00m
Nov 29 02:50:17 np0005539551 nova_compute[227360]: 2025-11-29 07:50:17.830 227364 DEBUG nova.compute.manager [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:17.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2119219106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.418 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.463 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.468 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:18 np0005539551 kernel: tap817d7796-29 (unregistering): left promiscuous mode
Nov 29 02:50:18 np0005539551 NetworkManager[48922]: <info>  [1764402618.8858] device (tap817d7796-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:18 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:18Z|00042|binding|INFO|Releasing lport 817d7796-296d-423b-85bf-555c653c2008 from this chassis (sb_readonly=0)
Nov 29 02:50:18 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:18Z|00043|binding|INFO|Setting lport 817d7796-296d-423b-85bf-555c653c2008 down in Southbound
Nov 29 02:50:18 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:18Z|00044|binding|INFO|Removing iface tap817d7796-29 ovn-installed in OVS
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.896 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:18 np0005539551 nova_compute[227360]: 2025-11-29 07:50:18.924 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:18 np0005539551 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 29 02:50:18 np0005539551 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 14.889s CPU time.
Nov 29 02:50:18 np0005539551 systemd-machined[190756]: Machine qemu-1-instance-00000006 terminated.
Nov 29 02:50:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:18.994 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:26:38 10.100.0.13'], port_security=['fa:16:3e:f2:26:38 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '387c7b1f-564d-472d-a21c-f0cd49d2fd4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06acd02df57149e795d2be57787bb9ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea30fb5a-0cff-4126-aaa9-22a68d1fc7db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=157c85d0-7492-4bd9-b8bf-525034a08746, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=817d7796-296d-423b-85bf-555c653c2008) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:18.995 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 817d7796-296d-423b-85bf-555c653c2008 in datapath 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 unbound from our chassis#033[00m
Nov 29 02:50:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:18.997 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:18.998 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b698f354-94f1-4b51-a2ca-7468d9e9f8ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:18.999 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 namespace which is not needed anymore#033[00m
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2431428129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.057 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.060 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.063 227364 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-3',id=5,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:40Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=22f24272-bec2-444d-8c83-30e63ce6badb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.064 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.065 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.070 227364 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'pci_devices' on Instance uuid 22f24272-bec2-444d-8c83-30e63ce6badb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.080 227364 INFO nova.virt.libvirt.driver [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Instance destroyed successfully.#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.081 227364 DEBUG nova.objects.instance [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'resources' on Instance uuid 387c7b1f-564d-472d-a21c-f0cd49d2fd4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.096 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <uuid>22f24272-bec2-444d-8c83-30e63ce6badb</uuid>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <name>instance-00000005</name>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:name>tempest-tempest.common.compute-instance-497379200-3</nova:name>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:50:17</nova:creationTime>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:user uuid="01739124bee74c899af6384f8ec2d427">tempest-AutoAllocateNetworkTest-1372302389-project-member</nova:user>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:project uuid="3c7cd563ba394223a76bd2579800406c">tempest-AutoAllocateNetworkTest-1372302389</nova:project>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <nova:port uuid="cb35bbab-def8-4ef1-a4d8-30358cb1c55c">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::161" ipVersion="6"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.0.104" ipVersion="4"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="serial">22f24272-bec2-444d-8c83-30e63ce6badb</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="uuid">22f24272-bec2-444d-8c83-30e63ce6badb</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/22f24272-bec2-444d-8c83-30e63ce6badb_disk">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/22f24272-bec2-444d-8c83-30e63ce6badb_disk.config">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:63:f3:09"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <target dev="tapcb35bbab-de"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/console.log" append="off"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:50:19 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:50:19 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:50:19 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:50:19 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.097 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Preparing to wait for external event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.097 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.097 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.098 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.099 227364 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-3',id=5,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:40Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=22f24272-bec2-444d-8c83-30e63ce6badb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.099 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.100 227364 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.101 227364 DEBUG os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.102 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.102 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.103 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.105 227364 DEBUG nova.virt.libvirt.vif [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-734780199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-734780199',id=6,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:49:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-ucmg2yuk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:49:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=387c7b1f-564d-472d-a21c-f0cd49d2fd4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.105 227364 DEBUG nova.network.os_vif_util [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "817d7796-296d-423b-85bf-555c653c2008", "address": "fa:16:3e:f2:26:38", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap817d7796-29", "ovs_interfaceid": "817d7796-296d-423b-85bf-555c653c2008", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.106 227364 DEBUG nova.network.os_vif_util [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.106 227364 DEBUG os_vif [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.109 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap817d7796-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.112 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.114 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb35bbab-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.114 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcb35bbab-de, col_values=(('external_ids', {'iface-id': 'cb35bbab-def8-4ef1-a4d8-30358cb1c55c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:f3:09', 'vm-uuid': '22f24272-bec2-444d-8c83-30e63ce6badb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.116 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 NetworkManager[48922]: <info>  [1764402619.1168] manager: (tapcb35bbab-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.117 227364 INFO os_vif [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:26:38,bridge_name='br-int',has_traffic_filtering=True,id=817d7796-296d-423b-85bf-555c653c2008,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap817d7796-29')#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.141 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.148 227364 INFO os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de')#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.265 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.265 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.265 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No VIF found with MAC fa:16:3e:63:f3:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.266 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Using config drive#033[00m
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [NOTICE]   (231803) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [NOTICE]   (231803) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [WARNING]  (231803) : Exiting Master process...
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [WARNING]  (231803) : Exiting Master process...
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [ALERT]    (231803) : Current worker (231805) exited with code 143 (Terminated)
Nov 29 02:50:19 np0005539551 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[231799]: [WARNING]  (231803) : All workers exited. Exiting... (0)
Nov 29 02:50:19 np0005539551 systemd[1]: libpod-2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582.scope: Deactivated successfully.
Nov 29 02:50:19 np0005539551 podman[232356]: 2025-11-29 07:50:19.752088055 +0000 UTC m=+0.608284743 container died 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 02:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:19.843 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:19.843 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:19.844 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.853 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:19.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.875 227364 DEBUG nova.compute.manager [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-unplugged-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.876 227364 DEBUG oslo_concurrency.lockutils [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.876 227364 DEBUG oslo_concurrency.lockutils [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.876 227364 DEBUG oslo_concurrency.lockutils [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.877 227364 DEBUG nova.compute.manager [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] No waiting events found dispatching network-vif-unplugged-817d7796-296d-423b-85bf-555c653c2008 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:19 np0005539551 nova_compute[227360]: 2025-11-29 07:50:19.877 227364 DEBUG nova.compute.manager [req-ff576467-14be-4937-b201-50feacba350e req-00f5dbd3-310a-4107-8775-c5c4fed0aa71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-unplugged-817d7796-296d-423b-85bf-555c653c2008 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay-97feefce17446c1b19f28c5179f2e1f08b88bc5b372b8538c1b5c4d2b304a920-merged.mount: Deactivated successfully.
Nov 29 02:50:19 np0005539551 podman[232356]: 2025-11-29 07:50:19.974541819 +0000 UTC m=+0.830738527 container cleanup 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 02:50:19 np0005539551 systemd[1]: libpod-conmon-2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582.scope: Deactivated successfully.
Nov 29 02:50:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:20 np0005539551 podman[232415]: 2025-11-29 07:50:20.047467364 +0000 UTC m=+0.044990895 container remove 2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.052 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0512edec-acdb-4cfd-a6db-24796342bd30]: (4, ('Sat Nov 29 07:50:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 (2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582)\n2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582\nSat Nov 29 07:50:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 (2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582)\n2cbf0447830a654ec9a1878ba0411dabbfa03a29a01de79cb066d87b80b04582\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.054 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[21ddc99f-a764-4e7c-a5c0-ebc0e5864a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.055 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1968b9ed-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:20 np0005539551 nova_compute[227360]: 2025-11-29 07:50:20.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539551 kernel: tap1968b9ed-30: left promiscuous mode
Nov 29 02:50:20 np0005539551 nova_compute[227360]: 2025-11-29 07:50:20.127 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539551 nova_compute[227360]: 2025-11-29 07:50:20.128 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.131 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[09a07a34-4c47-40c3-ae97-eacc1c8b3944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.147 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2ba8af-484a-4994-827f-5b3959b70823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.149 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84e220-c71b-4ab5-95cc-bb28b661618b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.163 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfecd88-eb11-4d30-8d44-82b2e83c5fb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564572, 'reachable_time': 31928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232432, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 systemd[1]: run-netns-ovnmeta\x2d1968b9ed\x2d3ce7\x2d4c25\x2d8c75\x2d7925f3a8c0b4.mount: Deactivated successfully.
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.173 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:20.174 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[57358f0c-d7f1-4504-a8a8-77571352e195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539551 nova_compute[227360]: 2025-11-29 07:50:20.619 227364 DEBUG nova.network.neutron [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Updated VIF entry in instance network info cache for port cb35bbab-def8-4ef1-a4d8-30358cb1c55c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:20 np0005539551 nova_compute[227360]: 2025-11-29 07:50:20.620 227364 DEBUG nova.network.neutron [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Updating instance_info_cache with network_info: [{"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:20Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:03:64 10.1.0.118
Nov 29 02:50:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:20Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:03:64 10.1.0.118
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.192 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Creating config drive at /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.202 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xo_1pc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.329 227364 DEBUG oslo_concurrency.lockutils [req-c5798793-6b60-4540-915f-fd72c1a05d6c req-cfa8cfe0-1b7f-4b25-a5b5-282018dbd191 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-22f24272-bec2-444d-8c83-30e63ce6badb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.330 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xo_1pc6" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.360 227364 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 22f24272-bec2-444d-8c83-30e63ce6badb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.363 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config 22f24272-bec2-444d-8c83-30e63ce6badb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:21.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.903 227364 DEBUG nova.compute.manager [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.904 227364 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.904 227364 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.904 227364 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.904 227364 DEBUG nova.compute.manager [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] No waiting events found dispatching network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:21 np0005539551 nova_compute[227360]: 2025-11-29 07:50:21.904 227364 WARNING nova.compute.manager [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received unexpected event network-vif-plugged-817d7796-296d-423b-85bf-555c653c2008 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:50:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:22 np0005539551 nova_compute[227360]: 2025-11-29 07:50:22.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:24.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.117 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.173 227364 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config 22f24272-bec2-444d-8c83-30e63ce6badb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.810s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.174 227364 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Deleting local config drive /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:24 np0005539551 kernel: tapcb35bbab-de: entered promiscuous mode
Nov 29 02:50:24 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:24Z|00045|binding|INFO|Claiming lport cb35bbab-def8-4ef1-a4d8-30358cb1c55c for this chassis.
Nov 29 02:50:24 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:24Z|00046|binding|INFO|cb35bbab-def8-4ef1-a4d8-30358cb1c55c: Claiming fa:16:3e:63:f3:09 10.1.0.104 fdfe:381f:8400:1::161
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:24 np0005539551 NetworkManager[48922]: <info>  [1764402624.2252] manager: (tapcb35bbab-de): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.234 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:f3:09 10.1.0.104 fdfe:381f:8400:1::161'], port_security=['fa:16:3e:63:f3:09 10.1.0.104 fdfe:381f:8400:1::161'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.104/26 fdfe:381f:8400:1::161/64', 'neutron:device_id': '22f24272-bec2-444d-8c83-30e63ce6badb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cb35bbab-def8-4ef1-a4d8-30358cb1c55c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.236 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cb35bbab-def8-4ef1-a4d8-30358cb1c55c in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 bound to our chassis#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.238 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363#033[00m
Nov 29 02:50:24 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:24Z|00047|binding|INFO|Setting lport cb35bbab-def8-4ef1-a4d8-30358cb1c55c ovn-installed in OVS
Nov 29 02:50:24 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:24Z|00048|binding|INFO|Setting lport cb35bbab-def8-4ef1-a4d8-30358cb1c55c up in Southbound
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.244 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:24 np0005539551 systemd-machined[190756]: New machine qemu-3-instance-00000005.
Nov 29 02:50:24 np0005539551 systemd-udevd[232489]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.255 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d50e5207-eb94-454f-aa9b-8541473d46e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Nov 29 02:50:24 np0005539551 NetworkManager[48922]: <info>  [1764402624.2695] device (tapcb35bbab-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:50:24 np0005539551 NetworkManager[48922]: <info>  [1764402624.2705] device (tapcb35bbab-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.287 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c61a82b7-5cd9-4cc2-ac3e-1d7b6b78989a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.290 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[834eefc2-682c-4b66-a4ac-30db670d85d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.319 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4b214635-8a66-4395-abd3-e7641a21c73a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.333 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[33f0a197-f2cd-4e07-a0e9-f055a368f18c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 6, 'rx_bytes': 1048, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 6, 'rx_bytes': 1048, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564849, 'reachable_time': 26496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 12, 'inoctets': 880, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 12, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 880, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 12, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232501, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.347 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5e954e64-5571-4039-8531-4f8b55743ffb]: (4, ({'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.66'], ['IFA_LOCAL', '10.1.0.66'], ['IFA_BROADCAST', '10.1.0.127'], ['IFA_LABEL', 'tap01d0d21b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564858, 'tstamp': 564858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232502, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d0d21b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564861, 'tstamp': 564861}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232502, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.348 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.350 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:24 np0005539551 nova_compute[227360]: 2025-11-29 07:50:24.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.352 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d0d21b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.352 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.352 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d0d21b-e0, col_values=(('external_ids', {'iface-id': 'c5a666a5-4b3e-4d4d-821a-ea0f64e84c84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:24.353 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.128 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402625.1276286, 22f24272-bec2-444d-8c83-30e63ce6badb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.129 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.168 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.172 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402625.127742, 22f24272-bec2-444d-8c83-30e63ce6badb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.173 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.201 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.205 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.229 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.362 227364 INFO nova.virt.libvirt.driver [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Deleting instance files /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c_del#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.362 227364 INFO nova.virt.libvirt.driver [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Deletion of /var/lib/nova/instances/387c7b1f-564d-472d-a21c-f0cd49d2fd4c_del complete#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.425 227364 DEBUG nova.virt.libvirt.host [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.426 227364 INFO nova.virt.libvirt.host [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] UEFI support detected#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.429 227364 INFO nova.compute.manager [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Took 7.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.430 227364 DEBUG oslo.service.loopingcall [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.430 227364 DEBUG nova.compute.manager [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:25 np0005539551 nova_compute[227360]: 2025-11-29 07:50:25.431 227364 DEBUG nova.network.neutron [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:26.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.153 227364 DEBUG nova.compute.manager [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.153 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.154 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.154 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.154 227364 DEBUG nova.compute.manager [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Processing event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.154 227364 DEBUG nova.compute.manager [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.155 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.155 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.155 227364 DEBUG oslo_concurrency.lockutils [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.155 227364 DEBUG nova.compute.manager [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] No waiting events found dispatching network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.155 227364 WARNING nova.compute.manager [req-76fe42b8-4c51-460a-a522-6b703270a5a4 req-c777e6fd-5db3-4fb3-8e28-220e5fab7b98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received unexpected event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.156 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.160 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402626.15982, 22f24272-bec2-444d-8c83-30e63ce6badb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.160 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.162 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.165 227364 INFO nova.virt.libvirt.driver [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Instance spawned successfully.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.166 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.189 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.195 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.199 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.199 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.199 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.200 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.200 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.200 227364 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.228 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.268 227364 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Took 45.46 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.268 227364 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.365 227364 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Took 53.21 seconds to build instance.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.384 227364 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 53.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.557 227364 DEBUG nova.network.neutron [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.583 227364 INFO nova.compute.manager [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.686 227364 DEBUG nova.compute.manager [req-aa447d29-e2de-40ee-a10b-54345b42fd7d req-5acafcc9-2545-4ce3-919c-8187e4cce3a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Received event network-vif-deleted-817d7796-296d-423b-85bf-555c653c2008 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.703 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.703 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:26 np0005539551 nova_compute[227360]: 2025-11-29 07:50:26.829 227364 DEBUG oslo_concurrency.processutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3196264102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.270 227364 DEBUG oslo_concurrency.processutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.277 227364 DEBUG nova.compute.provider_tree [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.306 227364 DEBUG nova.scheduler.client.report [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.351 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.502 227364 INFO nova.scheduler.client.report [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Deleted allocations for instance 387c7b1f-564d-472d-a21c-f0cd49d2fd4c#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.587 227364 DEBUG oslo_concurrency.lockutils [None req-50da1cba-c6ff-4e30-bec7-036395ce8828 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "387c7b1f-564d-472d-a21c-f0cd49d2fd4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:27 np0005539551 nova_compute[227360]: 2025-11-29 07:50:27.690 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:28.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:29 np0005539551 nova_compute[227360]: 2025-11-29 07:50:29.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:32.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:32 np0005539551 podman[232620]: 2025-11-29 07:50:32.62348478 +0000 UTC m=+0.066281652 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:50:32 np0005539551 podman[232619]: 2025-11-29 07:50:32.636007266 +0000 UTC m=+0.079197259 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:50:32 np0005539551 podman[232618]: 2025-11-29 07:50:32.689217005 +0000 UTC m=+0.128767807 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:32 np0005539551 nova_compute[227360]: 2025-11-29 07:50:32.693 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.085 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.085 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.086 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.086 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.087 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.089 227364 INFO nova.compute.manager [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Terminating instance#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.091 227364 DEBUG nova.compute.manager [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:33 np0005539551 kernel: tap89b28868-4d (unregistering): left promiscuous mode
Nov 29 02:50:33 np0005539551 NetworkManager[48922]: <info>  [1764402633.2426] device (tap89b28868-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.260 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:33Z|00049|binding|INFO|Releasing lport 89b28868-4d53-4327-ba0f-01f232ee632c from this chassis (sb_readonly=0)
Nov 29 02:50:33 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:33Z|00050|binding|INFO|Setting lport 89b28868-4d53-4327-ba0f-01f232ee632c down in Southbound
Nov 29 02:50:33 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:33Z|00051|binding|INFO|Removing iface tap89b28868-4d ovn-installed in OVS
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.276 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:03:64 10.1.0.118 fdfe:381f:8400:1::5b'], port_security=['fa:16:3e:95:03:64 10.1.0.118 fdfe:381f:8400:1::5b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.118/26 fdfe:381f:8400:1::5b/64', 'neutron:device_id': 'df95709e-b2a2-4e72-a99a-4df9e0fde1c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=89b28868-4d53-4327-ba0f-01f232ee632c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.277 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 89b28868-4d53-4327-ba0f-01f232ee632c in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 unbound from our chassis#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.280 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 29 02:50:33 np0005539551 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 15.817s CPU time.
Nov 29 02:50:33 np0005539551 systemd-machined[190756]: Machine qemu-2-instance-00000003 terminated.
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.311 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19178c5a-b9ce-494e-b05c-58002a34047a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.341 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2beed804-df3c-4213-9ffa-ff8e13a38a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.344 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[500bc04e-6a23-4919-b72c-62e84a95a2f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.377 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[abcc8035-080f-439a-b494-9353eae7454a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.395 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[960128b1-74d1-48dc-a2b3-64fb7dd320cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 8, 'rx_bytes': 1272, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 8, 'rx_bytes': 1272, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564849, 'reachable_time': 26496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 14, 'inoctets': 992, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 14, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 992, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 14, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232690, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.415 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd9e44d-c992-411a-b034-b2c00cea52a3]: (4, ({'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.66'], ['IFA_LOCAL', '10.1.0.66'], ['IFA_BROADCAST', '10.1.0.127'], ['IFA_LABEL', 'tap01d0d21b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564858, 'tstamp': 564858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232691, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01d0d21b-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564861, 'tstamp': 564861}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232691, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.417 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.423 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.424 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d0d21b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.424 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.425 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d0d21b-e0, col_values=(('external_ids', {'iface-id': 'c5a666a5-4b3e-4d4d-821a-ea0f64e84c84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:33.425 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.540 227364 INFO nova.virt.libvirt.driver [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Instance destroyed successfully.#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.541 227364 DEBUG nova.objects.instance [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'resources' on Instance uuid df95709e-b2a2-4e72-a99a-4df9e0fde1c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.555 227364 DEBUG nova.virt.libvirt.vif [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-1',id=3,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:02Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=df95709e-b2a2-4e72-a99a-4df9e0fde1c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.555 227364 DEBUG nova.network.os_vif_util [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "89b28868-4d53-4327-ba0f-01f232ee632c", "address": "fa:16:3e:95:03:64", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::5b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89b28868-4d", "ovs_interfaceid": "89b28868-4d53-4327-ba0f-01f232ee632c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.558 227364 DEBUG nova.network.os_vif_util [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.558 227364 DEBUG os_vif [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.559 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.560 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89b28868-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.562 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.564 227364 INFO os_vif [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:03:64,bridge_name='br-int',has_traffic_filtering=True,id=89b28868-4d53-4327-ba0f-01f232ee632c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89b28868-4d')#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.589 227364 DEBUG nova.compute.manager [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-unplugged-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.590 227364 DEBUG oslo_concurrency.lockutils [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.590 227364 DEBUG oslo_concurrency.lockutils [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.590 227364 DEBUG oslo_concurrency.lockutils [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.590 227364 DEBUG nova.compute.manager [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] No waiting events found dispatching network-vif-unplugged-89b28868-4d53-4327-ba0f-01f232ee632c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:33 np0005539551 nova_compute[227360]: 2025-11-29 07:50:33.590 227364 DEBUG nova.compute.manager [req-53d3f380-b7a7-4ce6-a609-1d28021cdf53 req-b0cd0f35-f289-42b1-8506-4a6587c51ee9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-unplugged-89b28868-4d53-4327-ba0f-01f232ee632c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.079 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402619.0753124, 387c7b1f-564d-472d-a21c-f0cd49d2fd4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.079 227364 INFO nova.compute.manager [-] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.103 227364 DEBUG nova.compute.manager [None req-8a636d00-fb15-43a2-9ab1-c4afb63c316c - - - - - -] [instance: 387c7b1f-564d-472d-a21c-f0cd49d2fd4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.910 227364 INFO nova.virt.libvirt.driver [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Deleting instance files /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4_del#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.911 227364 INFO nova.virt.libvirt.driver [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Deletion of /var/lib/nova/instances/df95709e-b2a2-4e72-a99a-4df9e0fde1c4_del complete#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.988 227364 INFO nova.compute.manager [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Took 1.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.989 227364 DEBUG oslo.service.loopingcall [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.989 227364 DEBUG nova.compute.manager [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:34 np0005539551 nova_compute[227360]: 2025-11-29 07:50:34.989 227364 DEBUG nova.network.neutron [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.710 227364 DEBUG nova.compute.manager [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.710 227364 DEBUG oslo_concurrency.lockutils [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.711 227364 DEBUG oslo_concurrency.lockutils [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.711 227364 DEBUG oslo_concurrency.lockutils [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.711 227364 DEBUG nova.compute.manager [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] No waiting events found dispatching network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:35 np0005539551 nova_compute[227360]: 2025-11-29 07:50:35.711 227364 WARNING nova.compute.manager [req-820e2510-bbf5-4074-90d3-bf4e77018e78 req-f6ff74a8-328e-4d0e-b880-76cae9b11beb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received unexpected event network-vif-plugged-89b28868-4d53-4327-ba0f-01f232ee632c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:50:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.299 227364 DEBUG nova.network.neutron [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.616 227364 DEBUG nova.compute.manager [req-787c4b28-95d2-43cc-ab19-ac93e661d581 req-1a13f9c0-a6e8-43bc-87eb-1875c90f42f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Received event network-vif-deleted-89b28868-4d53-4327-ba0f-01f232ee632c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.617 227364 INFO nova.compute.manager [req-787c4b28-95d2-43cc-ab19-ac93e661d581 req-1a13f9c0-a6e8-43bc-87eb-1875c90f42f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Neutron deleted interface 89b28868-4d53-4327-ba0f-01f232ee632c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.617 227364 DEBUG nova.network.neutron [req-787c4b28-95d2-43cc-ab19-ac93e661d581 req-1a13f9c0-a6e8-43bc-87eb-1875c90f42f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.620 227364 INFO nova.compute.manager [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Took 1.63 seconds to deallocate network for instance.#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.690 227364 DEBUG nova.compute.manager [req-787c4b28-95d2-43cc-ab19-ac93e661d581 req-1a13f9c0-a6e8-43bc-87eb-1875c90f42f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Detach interface failed, port_id=89b28868-4d53-4327-ba0f-01f232ee632c, reason: Instance df95709e-b2a2-4e72-a99a-4df9e0fde1c4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.721 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.722 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:36 np0005539551 nova_compute[227360]: 2025-11-29 07:50:36.824 227364 DEBUG oslo_concurrency.processutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4235226444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.303 227364 DEBUG oslo_concurrency.processutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.309 227364 DEBUG nova.compute.provider_tree [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.347 227364 DEBUG nova.scheduler.client.report [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.397 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.440 227364 INFO nova.scheduler.client.report [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Deleted allocations for instance df95709e-b2a2-4e72-a99a-4df9e0fde1c4#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.502 227364 DEBUG oslo_concurrency.lockutils [None req-724a2051-bb6e-41d0-9d11-773ab84ea57e 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "df95709e-b2a2-4e72-a99a-4df9e0fde1c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:37 np0005539551 nova_compute[227360]: 2025-11-29 07:50:37.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:38 np0005539551 nova_compute[227360]: 2025-11-29 07:50:38.561 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.497 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.498 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.521 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.602 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.603 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.608 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.609 227364 INFO nova.compute.claims [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:50:39 np0005539551 nova_compute[227360]: 2025-11-29 07:50:39.796 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:39 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:39Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:f3:09 10.1.0.104
Nov 29 02:50:39 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:39Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:f3:09 10.1.0.104
Nov 29 02:50:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2903544425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.264 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.269 227364 DEBUG nova.compute.provider_tree [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.284 227364 DEBUG nova.scheduler.client.report [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.303 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.304 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.368 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.369 227364 DEBUG nova.network.neutron [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.406 227364 INFO nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.426 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.560 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.561 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.562 227364 INFO nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Creating image(s)#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.591 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.623 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.650 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.653 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.685 227364 DEBUG nova.network.neutron [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.686 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.719 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.720 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.721 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.721 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.751 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:40 np0005539551 nova_compute[227360]: 2025-11-29 07:50:40.754 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.079 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.200 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] resizing rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.866 227364 DEBUG nova.objects.instance [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lazy-loading 'migration_context' on Instance uuid 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.891 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.891 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Ensure instance console log exists: /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.892 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.892 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.893 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.895 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.900 227364 WARNING nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.904 227364 DEBUG nova.virt.libvirt.host [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.905 227364 DEBUG nova.virt.libvirt.host [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.908 227364 DEBUG nova.virt.libvirt.host [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.908 227364 DEBUG nova.virt.libvirt.host [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.910 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.910 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.910 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.911 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.911 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.911 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.912 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.912 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.912 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.913 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.913 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.913 227364 DEBUG nova.virt.hardware [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:41 np0005539551 nova_compute[227360]: 2025-11-29 07:50:41.917 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3349057337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.376 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.402 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.406 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.428 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.428 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.429 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.429 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.429 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.431 227364 INFO nova.compute.manager [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Terminating instance#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.431 227364 DEBUG nova.compute.manager [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:42 np0005539551 kernel: tapcb35bbab-de (unregistering): left promiscuous mode
Nov 29 02:50:42 np0005539551 NetworkManager[48922]: <info>  [1764402642.4831] device (tapcb35bbab-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:42Z|00052|binding|INFO|Releasing lport cb35bbab-def8-4ef1-a4d8-30358cb1c55c from this chassis (sb_readonly=0)
Nov 29 02:50:42 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:42Z|00053|binding|INFO|Setting lport cb35bbab-def8-4ef1-a4d8-30358cb1c55c down in Southbound
Nov 29 02:50:42 np0005539551 ovn_controller[130266]: 2025-11-29T07:50:42Z|00054|binding|INFO|Removing iface tapcb35bbab-de ovn-installed in OVS
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.497 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.517 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.534 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:f3:09 10.1.0.104 fdfe:381f:8400:1::161'], port_security=['fa:16:3e:63:f3:09 10.1.0.104 fdfe:381f:8400:1::161'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.104/26 fdfe:381f:8400:1::161/64', 'neutron:device_id': '22f24272-bec2-444d-8c83-30e63ce6badb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cb35bbab-def8-4ef1-a4d8-30358cb1c55c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:42 np0005539551 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 29 02:50:42 np0005539551 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 14.044s CPU time.
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.536 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cb35bbab-def8-4ef1-a4d8-30358cb1c55c in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 unbound from our chassis#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.538 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:42 np0005539551 systemd-machined[190756]: Machine qemu-3-instance-00000005 terminated.
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.540 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8df607f4-2d04-4ce0-a5e0-caa2d01ab0ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.541 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 namespace which is not needed anymore#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.648 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [NOTICE]   (232062) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:42 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [NOTICE]   (232062) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:42 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [WARNING]  (232062) : Exiting Master process...
Nov 29 02:50:42 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [ALERT]    (232062) : Current worker (232064) exited with code 143 (Terminated)
Nov 29 02:50:42 np0005539551 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[232058]: [WARNING]  (232062) : All workers exited. Exiting... (0)
Nov 29 02:50:42 np0005539551 systemd[1]: libpod-38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f.scope: Deactivated successfully.
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.665 227364 INFO nova.virt.libvirt.driver [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Instance destroyed successfully.#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.666 227364 DEBUG nova.objects.instance [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'resources' on Instance uuid 22f24272-bec2-444d-8c83-30e63ce6badb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:42 np0005539551 podman[233018]: 2025-11-29 07:50:42.66648906 +0000 UTC m=+0.048916723 container died 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:50:42 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:42 np0005539551 systemd[1]: var-lib-containers-storage-overlay-eff993527b6560eea7761b3b078ae75ba7a0268a76f21919e5d914b86279486d-merged.mount: Deactivated successfully.
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.698 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 podman[233018]: 2025-11-29 07:50:42.706045053 +0000 UTC m=+0.088472726 container cleanup 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:50:42 np0005539551 systemd[1]: libpod-conmon-38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f.scope: Deactivated successfully.
Nov 29 02:50:42 np0005539551 podman[233057]: 2025-11-29 07:50:42.764719433 +0000 UTC m=+0.040324005 container remove 38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.769 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5092c4-7892-4f40-9741-00c7bf41e5cd]: (4, ('Sat Nov 29 07:50:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 (38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f)\n38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f\nSat Nov 29 07:50:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 (38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f)\n38d40b61bac8638f8b0006d3154ee9e6622b7cae5438e0c1533346d8880c5a9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.771 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb21c66-0351-4d53-8697-e0340919f6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.771 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 kernel: tap01d0d21b-e0: left promiscuous mode
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.791 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.794 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf3c1e8-b924-4b74-803b-a4dbdea49e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.810 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f316aad-5342-442d-bda5-f037b4cf67cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.811 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0613274-d56e-4104-a40f-2b158432ddc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.823 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bb29e69f-46da-40e5-9474-e0ff13ed08d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564843, 'reachable_time': 32382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233075, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3530478444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:42 np0005539551 systemd[1]: run-netns-ovnmeta\x2d01d0d21b\x2deaad\x2d4f5d\x2d82d1\x2d0f4d31e80363.mount: Deactivated successfully.
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.827 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:50:42.827 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[066d0311-e691-4340-88b3-e80a97029bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.841 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.843 227364 DEBUG nova.objects.instance [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.924 227364 DEBUG nova.virt.libvirt.vif [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-3',id=5,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-29T07:50:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:26Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=22f24272-bec2-444d-8c83-30e63ce6badb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.926 227364 DEBUG nova.network.os_vif_util [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "address": "fa:16:3e:63:f3:09", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::161", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb35bbab-de", "ovs_interfaceid": "cb35bbab-def8-4ef1-a4d8-30358cb1c55c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.927 227364 DEBUG nova.network.os_vif_util [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.927 227364 DEBUG os_vif [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.929 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.930 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb35bbab-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.931 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.937 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <uuid>9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be</uuid>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <name>instance-0000000a</name>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerDiagnosticsTest-server-1505208340</nova:name>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:50:41</nova:creationTime>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:user uuid="a70a34c4b9a243749a2bc234e384c389">tempest-ServerDiagnosticsTest-913526108-project-member</nova:user>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <nova:project uuid="4ec875eb6be146c58584e57e51696dcc">tempest-ServerDiagnosticsTest-913526108</nova:project>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="serial">9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="uuid">9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/console.log" append="off"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:50:42 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:50:42 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:50:42 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:50:42 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:42 np0005539551 nova_compute[227360]: 2025-11-29 07:50:42.939 227364 INFO os_vif [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:f3:09,bridge_name='br-int',has_traffic_filtering=True,id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb35bbab-de')#033[00m
Nov 29 02:50:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:43.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.161 227364 DEBUG nova.compute.manager [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-unplugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.162 227364 DEBUG oslo_concurrency.lockutils [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.162 227364 DEBUG oslo_concurrency.lockutils [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.162 227364 DEBUG oslo_concurrency.lockutils [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.162 227364 DEBUG nova.compute.manager [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] No waiting events found dispatching network-vif-unplugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.163 227364 DEBUG nova.compute.manager [req-e17175cb-a3bb-40f4-8667-9831a153d3ca req-f115ea72-ec9c-4bee-95ab-cca4629c217e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-unplugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.190 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.190 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.191 227364 INFO nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Using config drive#033[00m
Nov 29 02:50:44 np0005539551 nova_compute[227360]: 2025-11-29 07:50:44.215 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:45 np0005539551 nova_compute[227360]: 2025-11-29 07:50:45.495 227364 INFO nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Creating config drive at /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config#033[00m
Nov 29 02:50:45 np0005539551 nova_compute[227360]: 2025-11-29 07:50:45.501 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf7r1gww_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:45 np0005539551 nova_compute[227360]: 2025-11-29 07:50:45.640 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf7r1gww_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:45 np0005539551 nova_compute[227360]: 2025-11-29 07:50:45.672 227364 DEBUG nova.storage.rbd_utils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] rbd image 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:45 np0005539551 nova_compute[227360]: 2025-11-29 07:50:45.677 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:45.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:46.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:46 np0005539551 nova_compute[227360]: 2025-11-29 07:50:46.306 227364 INFO nova.virt.libvirt.driver [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Deleting instance files /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb_del#033[00m
Nov 29 02:50:46 np0005539551 nova_compute[227360]: 2025-11-29 07:50:46.308 227364 INFO nova.virt.libvirt.driver [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Deletion of /var/lib/nova/instances/22f24272-bec2-444d-8c83-30e63ce6badb_del complete#033[00m
Nov 29 02:50:47 np0005539551 nova_compute[227360]: 2025-11-29 07:50:47.701 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:47.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:47 np0005539551 nova_compute[227360]: 2025-11-29 07:50:47.932 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:48.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.136 227364 DEBUG nova.compute.manager [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.136 227364 DEBUG oslo_concurrency.lockutils [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.136 227364 DEBUG oslo_concurrency.lockutils [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.137 227364 DEBUG oslo_concurrency.lockutils [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.137 227364 DEBUG nova.compute.manager [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] No waiting events found dispatching network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.137 227364 WARNING nova.compute.manager [req-56ee5ba7-21ea-4858-835d-3a8ed83c2239 req-695fa48b-e641-4660-a5ac-5911c3e24799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received unexpected event network-vif-plugged-cb35bbab-def8-4ef1-a4d8-30358cb1c55c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.145 227364 INFO nova.compute.manager [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Took 5.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.145 227364 DEBUG oslo.service.loopingcall [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.145 227364 DEBUG nova.compute.manager [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.146 227364 DEBUG nova.network.neutron [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.504 227364 DEBUG oslo_concurrency.processutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.505 227364 INFO nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Deleting local config drive /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.539 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402633.5384047, df95709e-b2a2-4e72-a99a-4df9e0fde1c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.540 227364 INFO nova.compute.manager [-] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:50:48 np0005539551 systemd-machined[190756]: New machine qemu-4-instance-0000000a.
Nov 29 02:50:48 np0005539551 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Nov 29 02:50:48 np0005539551 nova_compute[227360]: 2025-11-29 07:50:48.756 227364 DEBUG nova.compute.manager [None req-8af14b21-6c72-4413-bcc7-f138674c1a44 - - - - - -] [instance: df95709e-b2a2-4e72-a99a-4df9e0fde1c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.077 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402649.0770512, 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.078 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.079 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.079 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.082 227364 INFO nova.virt.libvirt.driver [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance spawned successfully.#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.083 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.107 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.112 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.112 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.113 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.113 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.113 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.114 227364 DEBUG nova.virt.libvirt.driver [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.117 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.362 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.363 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402649.0778444, 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.363 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] VM Started (Lifecycle Event)
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.535 227364 INFO nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Took 8.97 seconds to spawn the instance on the hypervisor.
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.536 227364 DEBUG nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.544 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.547 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.774 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.785 227364 DEBUG nova.network.neutron [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:49 np0005539551 nova_compute[227360]: 2025-11-29 07:50:49.887 227364 INFO nova.compute.manager [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Took 10.31 seconds to build instance.
Nov 29 02:50:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:49.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:50.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.193 227364 DEBUG nova.compute.manager [req-44cae74c-82e3-47f5-964c-2ccd718c40ec req-ca4b14fb-d603-4871-9f61-f34d0c6e2a55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Received event network-vif-deleted-cb35bbab-def8-4ef1-a4d8-30358cb1c55c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.193 227364 INFO nova.compute.manager [req-44cae74c-82e3-47f5-964c-2ccd718c40ec req-ca4b14fb-d603-4871-9f61-f34d0c6e2a55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Neutron deleted interface cb35bbab-def8-4ef1-a4d8-30358cb1c55c; detaching it from the instance and deleting it from the info cache
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.194 227364 DEBUG nova.network.neutron [req-44cae74c-82e3-47f5-964c-2ccd718c40ec req-ca4b14fb-d603-4871-9f61-f34d0c6e2a55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.233 227364 INFO nova.compute.manager [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Took 2.09 seconds to deallocate network for instance.
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.251 227364 DEBUG nova.compute.manager [req-44cae74c-82e3-47f5-964c-2ccd718c40ec req-ca4b14fb-d603-4871-9f61-f34d0c6e2a55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Detach interface failed, port_id=cb35bbab-def8-4ef1-a4d8-30358cb1c55c, reason: Instance 22f24272-bec2-444d-8c83-30e63ce6badb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.253 227364 DEBUG oslo_concurrency.lockutils [None req-76fc51c3-4365-47f3-8221-a9ed6891d1e5 a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.623 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.623 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:50 np0005539551 nova_compute[227360]: 2025-11-29 07:50:50.704 227364 DEBUG oslo_concurrency.processutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413346405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.104 227364 DEBUG oslo_concurrency.processutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.112 227364 DEBUG nova.compute.provider_tree [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.162 227364 DEBUG nova.scheduler.client.report [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.223 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.260 227364 INFO nova.scheduler.client.report [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Deleted allocations for instance 22f24272-bec2-444d-8c83-30e63ce6badb
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.353 227364 DEBUG oslo_concurrency.lockutils [None req-2def771c-7a29-4eee-8bd1-e1b7f4cdc3ce 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "22f24272-bec2-444d-8c83-30e63ce6badb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.516 227364 DEBUG nova.compute.manager [None req-bbcfca00-eeba-43d1-8df2-71bd78320f50 50394ceae90042dabee2be68f1c23bb2 033bc3a89aae4e7db02ab7b819c8380c - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:51 np0005539551 nova_compute[227360]: 2025-11-29 07:50:51.519 227364 INFO nova.compute.manager [None req-bbcfca00-eeba-43d1-8df2-71bd78320f50 50394ceae90042dabee2be68f1c23bb2 033bc3a89aae4e7db02ab7b819c8380c - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Retrieving diagnostics
Nov 29 02:50:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:50:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:50:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.190 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.192 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.192 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.193 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.193 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.194 227364 INFO nova.compute.manager [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Terminating instance
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.196 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "refresh_cache-9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.196 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquired lock "refresh_cache-9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.196 227364 DEBUG nova.network.neutron [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.504 227364 DEBUG nova.network.neutron [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.703 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.874 227364 DEBUG nova.network.neutron [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.914 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Releasing lock "refresh_cache-9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.915 227364 DEBUG nova.compute.manager [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:50:52 np0005539551 nova_compute[227360]: 2025-11-29 07:50:52.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:53 np0005539551 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 29 02:50:53 np0005539551 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 4.417s CPU time.
Nov 29 02:50:53 np0005539551 systemd-machined[190756]: Machine qemu-4-instance-0000000a terminated.
Nov 29 02:50:53 np0005539551 nova_compute[227360]: 2025-11-29 07:50:53.135 227364 INFO nova.virt.libvirt.driver [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance destroyed successfully.
Nov 29 02:50:53 np0005539551 nova_compute[227360]: 2025-11-29 07:50:53.135 227364 DEBUG nova.objects.instance [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lazy-loading 'resources' on Instance uuid 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:50:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:50:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:56.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:56 np0005539551 nova_compute[227360]: 2025-11-29 07:50:56.702 227364 INFO nova.virt.libvirt.driver [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Deleting instance files /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_del
Nov 29 02:50:56 np0005539551 nova_compute[227360]: 2025-11-29 07:50:56.704 227364 INFO nova.virt.libvirt.driver [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Deletion of /var/lib/nova/instances/9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be_del complete
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.373 227364 INFO nova.compute.manager [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Took 4.46 seconds to destroy the instance on the hypervisor.
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.374 227364 DEBUG oslo.service.loopingcall [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.375 227364 DEBUG nova.compute.manager [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.375 227364 DEBUG nova.network.neutron [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.530 227364 DEBUG nova.network.neutron [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.549 227364 DEBUG nova.network.neutron [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.573 227364 INFO nova.compute.manager [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Took 0.20 seconds to deallocate network for instance.
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.626 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.627 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.664 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402642.6638076, 22f24272-bec2-444d-8c83-30e63ce6badb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.665 227364 INFO nova.compute.manager [-] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] VM Stopped (Lifecycle Event)
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.735 227364 DEBUG oslo_concurrency.processutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.759 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539551 nova_compute[227360]: 2025-11-29 07:50:57.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/506701073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:58 np0005539551 nova_compute[227360]: 2025-11-29 07:50:58.156 227364 DEBUG oslo_concurrency.processutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:58 np0005539551 nova_compute[227360]: 2025-11-29 07:50:58.163 227364 DEBUG nova.compute.provider_tree [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:50:58 np0005539551 nova_compute[227360]: 2025-11-29 07:50:58.687 227364 DEBUG nova.compute.manager [None req-b70b09c2-68fe-4e1f-a3ee-1f111a19c8de - - - - - -] [instance: 22f24272-bec2-444d-8c83-30e63ce6badb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:58 np0005539551 nova_compute[227360]: 2025-11-29 07:50:58.688 227364 DEBUG nova.scheduler.client.report [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:50:58 np0005539551 nova_compute[227360]: 2025-11-29 07:50:58.982 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:59 np0005539551 nova_compute[227360]: 2025-11-29 07:50:59.097 227364 INFO nova.scheduler.client.report [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Deleted allocations for instance 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be
Nov 29 02:50:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:59 np0005539551 nova_compute[227360]: 2025-11-29 07:50:59.281 227364 DEBUG oslo_concurrency.lockutils [None req-624c79bf-8f26-4f78-81da-f90db34d1a8e a70a34c4b9a243749a2bc234e384c389 4ec875eb6be146c58584e57e51696dcc - - default default] Lock "9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:50:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:59.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:02 np0005539551 nova_compute[227360]: 2025-11-29 07:51:02.762 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:02 np0005539551 nova_compute[227360]: 2025-11-29 07:51:02.937 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:03 np0005539551 podman[233281]: 2025-11-29 07:51:03.617709196 +0000 UTC m=+0.060266116 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 29 02:51:03 np0005539551 podman[233282]: 2025-11-29 07:51:03.62219995 +0000 UTC m=+0.058764055 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 02:51:03 np0005539551 podman[233280]: 2025-11-29 07:51:03.648214838 +0000 UTC m=+0.095356615 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:51:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:04.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:04 np0005539551 nova_compute[227360]: 2025-11-29 07:51:04.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:05 np0005539551 nova_compute[227360]: 2025-11-29 07:51:05.209 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:05.210 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:05.211 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:51:05 np0005539551 nova_compute[227360]: 2025-11-29 07:51:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:05 np0005539551 nova_compute[227360]: 2025-11-29 07:51:05.654 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:06 np0005539551 nova_compute[227360]: 2025-11-29 07:51:06.012 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:51:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:06.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:51:06 np0005539551 nova_compute[227360]: 2025-11-29 07:51:06.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:06 np0005539551 nova_compute[227360]: 2025-11-29 07:51:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:06 np0005539551 nova_compute[227360]: 2025-11-29 07:51:06.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:51:07 np0005539551 nova_compute[227360]: 2025-11-29 07:51:07.764 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:07 np0005539551 nova_compute[227360]: 2025-11-29 07:51:07.939 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:08.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.133 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402653.1325998, 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.134 227364 INFO nova.compute.manager [-] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.556 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.557 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.558 227364 DEBUG nova.compute.manager [None req-aedb2b3a-789c-45cd-9e12-a70828687aa9 - - - - - -] [instance: 9d3fc5ac-b119-4aa1-b1f3-a75c2fdfb2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.558 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.559 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.559 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.559 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.559 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.631 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.631 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.632 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.632 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:51:08 np0005539551 nova_compute[227360]: 2025-11-29 07:51:08.632 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/15233577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:09 np0005539551 nova_compute[227360]: 2025-11-29 07:51:09.120 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:09 np0005539551 nova_compute[227360]: 2025-11-29 07:51:09.303 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:09 np0005539551 nova_compute[227360]: 2025-11-29 07:51:09.304 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4979MB free_disk=20.85163116455078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:51:09 np0005539551 nova_compute[227360]: 2025-11-29 07:51:09.304 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:09 np0005539551 nova_compute[227360]: 2025-11-29 07:51:09.304 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:09.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:10.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:10.214 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:11.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:12.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:12 np0005539551 nova_compute[227360]: 2025-11-29 07:51:12.765 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:12 np0005539551 nova_compute[227360]: 2025-11-29 07:51:12.941 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:14.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:15.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:16.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:16 np0005539551 nova_compute[227360]: 2025-11-29 07:51:16.766 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:51:16 np0005539551 nova_compute[227360]: 2025-11-29 07:51:16.766 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:51:16 np0005539551 nova_compute[227360]: 2025-11-29 07:51:16.823 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/181586926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.249 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.256 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.281 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.338 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.339 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.767 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539551 nova_compute[227360]: 2025-11-29 07:51:17.943 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:17.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:18.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:19.843 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:19.844 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:51:19.844 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:19.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:51:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:21.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:51:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:51:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:51:22 np0005539551 nova_compute[227360]: 2025-11-29 07:51:22.809 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:22 np0005539551 nova_compute[227360]: 2025-11-29 07:51:22.945 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:23.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:25.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:27 np0005539551 nova_compute[227360]: 2025-11-29 07:51:27.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:27 np0005539551 nova_compute[227360]: 2025-11-29 07:51:27.947 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:29.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:32.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:32 np0005539551 nova_compute[227360]: 2025-11-29 07:51:32.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:32 np0005539551 nova_compute[227360]: 2025-11-29 07:51:32.948 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:34.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:34 np0005539551 podman[233523]: 2025-11-29 07:51:34.670814306 +0000 UTC m=+0.098803351 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:51:34 np0005539551 podman[233522]: 2025-11-29 07:51:34.684769711 +0000 UTC m=+0.115237604 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:51:34 np0005539551 podman[233521]: 2025-11-29 07:51:34.712509367 +0000 UTC m=+0.143088273 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:51:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:35.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:36.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:37 np0005539551 nova_compute[227360]: 2025-11-29 07:51:37.816 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:37 np0005539551 nova_compute[227360]: 2025-11-29 07:51:37.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:37.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:38.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:51:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:39.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:51:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:51:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:42 np0005539551 podman[235591]: 2025-11-29 07:53:42.762902635 +0000 UTC m=+0.089695220 container cleanup aa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.770 227364 DEBUG nova.virt.libvirt.vif [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-9680429',display_name='tempest-VolumesAdminNegativeTest-server-9680429',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-9680429',id=15,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba323f7dc95a4f11911e6559a1b3c99e',ramdisk_id='',reservation_id='r-u6ecvgbm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1736550981',owner_user_name='tempest-VolumesAdminNegativeTest-1736550981-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:37Z,user_data=None,user_id='7f80aba0abfa403b80928e251377a7cd',uuid=ccc93294-177e-4d6b-83ae-d06cb1d8bd2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5b78c67-19b1-47a0-a685-c663873f8e9e", "address": "fa:16:3e:ab:1f:5c", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5b78c67-19", "ovs_interfaceid": "c5b78c67-19b1-47a0-a685-c663873f8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.771 227364 DEBUG nova.network.os_vif_util [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converting VIF {"id": "c5b78c67-19b1-47a0-a685-c663873f8e9e", "address": "fa:16:3e:ab:1f:5c", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5b78c67-19", "ovs_interfaceid": "c5b78c67-19b1-47a0-a685-c663873f8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.772 227364 DEBUG nova.network.os_vif_util [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:1f:5c,bridge_name='br-int',has_traffic_filtering=True,id=c5b78c67-19b1-47a0-a685-c663873f8e9e,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5b78c67-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.772 227364 DEBUG os_vif [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:1f:5c,bridge_name='br-int',has_traffic_filtering=True,id=c5b78c67-19b1-47a0-a685-c663873f8e9e,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5b78c67-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:53:42 np0005539551 systemd[1]: libpod-conmon-aa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1.scope: Deactivated successfully.
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.775 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.775 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5b78c67-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.778 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.780 227364 INFO os_vif [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:1f:5c,bridge_name='br-int',has_traffic_filtering=True,id=c5b78c67-19b1-47a0-a685-c663873f8e9e,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5b78c67-19')#033[00m
Nov 29 02:53:42 np0005539551 podman[235631]: 2025-11-29 07:53:42.824782321 +0000 UTC m=+0.041871378 container remove aa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.826 227364 DEBUG nova.compute.manager [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Received event network-vif-unplugged-c5b78c67-19b1-47a0-a685-c663873f8e9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.827 227364 DEBUG oslo_concurrency.lockutils [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.828 227364 DEBUG oslo_concurrency.lockutils [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.828 227364 DEBUG oslo_concurrency.lockutils [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.828 227364 DEBUG nova.compute.manager [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] No waiting events found dispatching network-vif-unplugged-c5b78c67-19b1-47a0-a685-c663873f8e9e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.829 227364 DEBUG nova.compute.manager [req-cad5882f-1b23-46a0-982d-c96f65c4d328 req-c82b2862-2c34-4b45-b3f7-a7445552494e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Received event network-vif-unplugged-c5b78c67-19b1-47a0-a685-c663873f8e9e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.830 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b676afb1-7be7-4b7d-8027-b2167931e78e]: (4, ('Sat Nov 29 07:53:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 (aa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1)\naa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1\nSat Nov 29 07:53:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 (aa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1)\naa013e60222033a1e413a76a47628dae0b7eea2a01220fa52b9885036557bcc1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.831 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbb86fb-5498-4072-ba0b-76acda49fafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.832 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f03fa2b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:42 np0005539551 kernel: tap7f03fa2b-a0: left promiscuous mode
Nov 29 02:53:42 np0005539551 nova_compute[227360]: 2025-11-29 07:53:42.846 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.849 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad69746-659e-42c9-a063-17beae1956dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.866 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f6710a38-e7d3-4c86-b3ca-36fccaed010b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.868 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da84d86c-8140-4b3d-b18b-a705f4829ab4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.881 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1abb42d3-6154-49e9-bab7-48d7b568ce4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586469, 'reachable_time': 40722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235662, 'error': None, 'target': 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:42 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7f03fa2b\x2dae90\x2d41ca\x2db1ba\x2d770fedbd8710.mount: Deactivated successfully.
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.883 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:42.883 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[db855be6-d07c-42a8-ba59-74de06098177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:43 np0005539551 rsyslogd[1004]: imjournal: 1891 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.126 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.420 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.421 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.432 227364 INFO nova.compute.rpcapi [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.433 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.712 227364 INFO nova.virt.libvirt.driver [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Deleting instance files /var/lib/nova/instances/ccc93294-177e-4d6b-83ae-d06cb1d8bd2d_del#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.713 227364 INFO nova.virt.libvirt.driver [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Deletion of /var/lib/nova/instances/ccc93294-177e-4d6b-83ae-d06cb1d8bd2d_del complete#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.771 227364 INFO nova.compute.manager [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.772 227364 DEBUG oslo.service.loopingcall [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.773 227364 DEBUG nova.compute.manager [-] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:53:43 np0005539551 nova_compute[227360]: 2025-11-29 07:53:43.773 227364 DEBUG nova.network.neutron [-] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:53:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.109 227364 DEBUG nova.network.neutron [-] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.142 227364 INFO nova.compute.manager [-] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.197 227364 DEBUG oslo_concurrency.lockutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.198 227364 DEBUG oslo_concurrency.lockutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.271 227364 DEBUG nova.compute.manager [req-c415de36-122e-4699-b45b-ea5bfbbc56d9 req-cd1eadb4-bd88-46d0-b863-45a9f711e831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Received event network-vif-deleted-c5b78c67-19b1-47a0-a685-c663873f8e9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.273 227364 DEBUG oslo_concurrency.processutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.333 227364 DEBUG nova.compute.manager [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Received event network-vif-plugged-c5b78c67-19b1-47a0-a685-c663873f8e9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.335 227364 DEBUG oslo_concurrency.lockutils [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.335 227364 DEBUG oslo_concurrency.lockutils [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.336 227364 DEBUG oslo_concurrency.lockutils [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.337 227364 DEBUG nova.compute.manager [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] No waiting events found dispatching network-vif-plugged-c5b78c67-19b1-47a0-a685-c663873f8e9e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.337 227364 WARNING nova.compute.manager [req-58f228ef-8076-498b-a962-e39de3a7b1e4 req-0479f817-e513-49f2-b282-05a15f53dbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Received unexpected event network-vif-plugged-c5b78c67-19b1-47a0-a685-c663873f8e9e for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:53:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2115210206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.739 227364 DEBUG oslo_concurrency.processutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.775 227364 DEBUG nova.compute.provider_tree [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:45 np0005539551 nova_compute[227360]: 2025-11-29 07:53:45.880 227364 DEBUG nova.scheduler.client.report [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:46 np0005539551 nova_compute[227360]: 2025-11-29 07:53:46.616 227364 DEBUG oslo_concurrency.lockutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:46.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:46 np0005539551 nova_compute[227360]: 2025-11-29 07:53:46.674 227364 INFO nova.scheduler.client.report [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Deleted allocations for instance ccc93294-177e-4d6b-83ae-d06cb1d8bd2d#033[00m
Nov 29 02:53:46 np0005539551 nova_compute[227360]: 2025-11-29 07:53:46.783 227364 DEBUG oslo_concurrency.lockutils [None req-7b42991d-ab69-4e8e-95b8-78b69aec114f 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "ccc93294-177e-4d6b-83ae-d06cb1d8bd2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:47 np0005539551 podman[235861]: 2025-11-29 07:53:47.524526993 +0000 UTC m=+0.088849737 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:47 np0005539551 podman[235861]: 2025-11-29 07:53:47.653675302 +0000 UTC m=+0.217998016 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:53:47 np0005539551 nova_compute[227360]: 2025-11-29 07:53:47.778 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.168 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.938 227364 DEBUG nova.compute.manager [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.938 227364 DEBUG oslo_concurrency.lockutils [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.938 227364 DEBUG oslo_concurrency.lockutils [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.939 227364 DEBUG oslo_concurrency.lockutils [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.940 227364 DEBUG nova.compute.manager [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:48 np0005539551 nova_compute[227360]: 2025-11-29 07:53:48.940 227364 DEBUG nova.compute.manager [req-cd07659e-ee8f-4178-9add-f956f4222680 req-cb103dd0-c5ca-4d09-90b5-286540b80bcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.479 227364 INFO nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Took 6.06 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.480 227364 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.497 227364 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2bb12f77-8958-446b-813d-a59f149a549b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(c850deb7-f09e-4436-ad51-149f3e81ddfe),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.503 227364 DEBUG nova.objects.instance [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lazy-loading 'migration_context' on Instance uuid 2bb12f77-8958-446b-813d-a59f149a549b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.505 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.508 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.508 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.524 227364 DEBUG nova.virt.libvirt.vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2050487278',display_name='tempest-LiveMigrationTest-server-2050487278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2050487278',id=13,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-f3pegydb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:37Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=2bb12f77-8958-446b-813d-a59f149a549b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.524 227364 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.525 227364 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.525 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:53:49 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:80:fa:cd"/>
Nov 29 02:53:49 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 02:53:49 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:53:49 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 02:53:49 np0005539551 nova_compute[227360]:  <target dev="tap230cb1ef-c5"/>
Nov 29 02:53:49 np0005539551 nova_compute[227360]: </interface>
Nov 29 02:53:49 np0005539551 nova_compute[227360]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 02:53:49 np0005539551 nova_compute[227360]: 2025-11-29 07:53:49.526 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:50 np0005539551 nova_compute[227360]: 2025-11-29 07:53:50.011 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:50 np0005539551 nova_compute[227360]: 2025-11-29 07:53:50.013 227364 INFO nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 02:53:50 np0005539551 nova_compute[227360]: 2025-11-29 07:53:50.100 227364 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 02:53:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:50 np0005539551 nova_compute[227360]: 2025-11-29 07:53:50.603 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:50 np0005539551 nova_compute[227360]: 2025-11-29 07:53:50.604 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:53:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.008 227364 DEBUG nova.compute.manager [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.009 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.009 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.009 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.010 227364 DEBUG nova.compute.manager [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.010 227364 WARNING nova.compute.manager [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state migrating.
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.010 227364 DEBUG nova.compute.manager [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-changed-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.010 227364 DEBUG nova.compute.manager [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Refreshing instance network info cache due to event network-changed-230cb1ef-c551-4666-88fe-e49994b798e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.011 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.011 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.011 227364 DEBUG nova.network.neutron [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Refreshing network info cache for port 230cb1ef-c551-4666-88fe-e49994b798e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.106 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.107 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.609 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:51 np0005539551 nova_compute[227360]: 2025-11-29 07:53:51.610 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:53:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:51Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:fa:cd 10.100.0.12
Nov 29 02:53:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:51Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:fa:cd 10.100.0.12
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.112 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.114 227364 DEBUG nova.virt.libvirt.migration [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:53:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.271 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402832.2713358, 2bb12f77-8958-446b-813d-a59f149a549b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.272 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] VM Paused (Lifecycle Event)
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.294 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.298 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.317 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 02:53:52 np0005539551 kernel: tap230cb1ef-c5 (unregistering): left promiscuous mode
Nov 29 02:53:52 np0005539551 NetworkManager[48922]: <info>  [1764402832.4893] device (tap230cb1ef-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00074|binding|INFO|Releasing lport 230cb1ef-c551-4666-88fe-e49994b798e9 from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00075|binding|INFO|Setting lport 230cb1ef-c551-4666-88fe-e49994b798e9 down in Southbound
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.502 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00076|binding|INFO|Releasing lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00077|binding|INFO|Setting lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a down in Southbound
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00078|binding|INFO|Removing iface tap230cb1ef-c5 ovn-installed in OVS
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.507 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00079|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00080|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.515 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:fa:cd 10.100.0.12'], port_security=['fa:16:3e:80:fa:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1631774297', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2bb12f77-8958-446b-813d-a59f149a549b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1631774297', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=230cb1ef-c551-4666-88fe-e49994b798e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.518 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:c7:5e 19.80.0.38'], port_security=['fa:16:3e:58:c7:5e 19.80.0.38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['230cb1ef-c551-4666-88fe-e49994b798e9'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-815355501', 'neutron:cidrs': '19.80.0.38/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67969051-efe2-48f0-99e2-c96ec0167864', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-815355501', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=aa62182a-1418-4867-a065-405baf63a28f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=25733cd0-1d42-411e-be69-7bf3a59b5a2a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.520 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 230cb1ef-c551-4666-88fe-e49994b798e9 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.523 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.525 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b66185-f71e-4198-aa95-a7ed40f021c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:52.526 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace which is not needed anymore
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.526 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.540 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:52 np0005539551 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539551 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 14.621s CPU time.
Nov 29 02:53:52 np0005539551 systemd-machined[190756]: Machine qemu-6-instance-0000000d terminated.
Nov 29 02:53:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:52 np0005539551 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[235218]: [NOTICE]   (235240) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:52 np0005539551 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[235218]: [NOTICE]   (235240) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:52 np0005539551 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[235218]: [WARNING]  (235240) : Exiting Master process...
Nov 29 02:53:52 np0005539551 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[235218]: [ALERT]    (235240) : Current worker (235257) exited with code 143 (Terminated)
Nov 29 02:53:52 np0005539551 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[235218]: [WARNING]  (235240) : All workers exited. Exiting... (0)
Nov 29 02:53:52 np0005539551 systemd[1]: libpod-af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539551 conmon[235218]: conmon af66646d02b1d0803805 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965.scope/container/memory.events
Nov 29 02:53:52 np0005539551 podman[236137]: 2025-11-29 07:53:52.710335536 +0000 UTC m=+0.056911011 container died af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:53:52 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.781 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:52 np0005539551 virtqemud[226785]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/2bb12f77-8958-446b-813d-a59f149a549b_disk: No such file or directory
Nov 29 02:53:52 np0005539551 virtqemud[226785]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/2bb12f77-8958-446b-813d-a59f149a549b_disk: No such file or directory
Nov 29 02:53:52 np0005539551 systemd[1]: var-lib-containers-storage-overlay-7fe3af82e945b94da0ba1675ca165b9bf9bd1a2a83f03497670b9886d1cfda06-merged.mount: Deactivated successfully.
Nov 29 02:53:52 np0005539551 podman[236137]: 2025-11-29 07:53:52.846660883 +0000 UTC m=+0.193236348 container cleanup af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:52 np0005539551 systemd[1]: libpod-conmon-af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.860 227364 DEBUG nova.virt.libvirt.guest [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.860 227364 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migration operation has completed#033[00m
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.861 227364 INFO nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] _post_live_migration() is started..#033[00m
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.866 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.866 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.866 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00081|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 ovn_controller[130266]: 2025-11-29T07:53:52Z|00082|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:53:52 np0005539551 nova_compute[227360]: 2025-11-29 07:53:52.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.101 227364 DEBUG nova.compute.manager [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.102 227364 DEBUG oslo_concurrency.lockutils [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.102 227364 DEBUG oslo_concurrency.lockutils [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.103 227364 DEBUG oslo_concurrency.lockutils [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.104 227364 DEBUG nova.compute.manager [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.104 227364 DEBUG nova.compute.manager [req-130636b2-2edc-4885-b4c9-f235c20a5630 req-c9f5029b-0f59-422b-894a-da5e7b8d2a34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:53 np0005539551 podman[236171]: 2025-11-29 07:53:53.115302426 +0000 UTC m=+0.248832601 container remove af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.124 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d21d36f-b59a-4348-bf76-274c036e2778]: (4, ('Sat Nov 29 07:53:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965)\naf66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965\nSat Nov 29 07:53:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (af66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965)\naf66646d02b1d080380507e3942f11a21cd7e09a0f44bec95f63e89603ec3965\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.126 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[61af1520-07c7-4f3d-b93f-13f29c8f220d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.128 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.130 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 kernel: tap7a06a21a-b0: left promiscuous mode
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.148 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.153 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[508c8f6c-a1a9-4ac3-a3d9-398c9e93438f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.169 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c895d9b-8cd6-444d-b26e-8998133c487f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.170 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.170 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48890f7e-78f1-4ed9-a529-2a2885f4dad3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.189 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[41edf5f1-68fe-46c9-af01-74673d2a327a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586263, 'reachable_time': 20237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236197, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.192 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.192 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d92d6883-050a-4357-a15b-d985f20a1e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.193 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 25733cd0-1d42-411e-be69-7bf3a59b5a2a in datapath 67969051-efe2-48f0-99e2-c96ec0167864 unbound from our chassis#033[00m
Nov 29 02:53:53 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7a06a21a\x2dba04\x2d4a14\x2d8d62\x2dc931cbbf124d.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.195 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67969051-efe2-48f0-99e2-c96ec0167864, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.196 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4efa5b9d-786c-4f39-9e32-137b0a88bc4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.197 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 namespace which is not needed anymore#033[00m
Nov 29 02:53:53 np0005539551 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[235374]: [NOTICE]   (235395) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:53 np0005539551 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[235374]: [NOTICE]   (235395) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:53 np0005539551 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[235374]: [WARNING]  (235395) : Exiting Master process...
Nov 29 02:53:53 np0005539551 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[235374]: [ALERT]    (235395) : Current worker (235399) exited with code 143 (Terminated)
Nov 29 02:53:53 np0005539551 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[235374]: [WARNING]  (235395) : All workers exited. Exiting... (0)
Nov 29 02:53:53 np0005539551 systemd[1]: libpod-6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607.scope: Deactivated successfully.
Nov 29 02:53:53 np0005539551 podman[236215]: 2025-11-29 07:53:53.366358648 +0000 UTC m=+0.061466546 container died 6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:53:53 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539551 systemd[1]: var-lib-containers-storage-overlay-aa181658343978d2a82fd136b26475e748668914aca822ca437fb71c82d35a27-merged.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539551 podman[236215]: 2025-11-29 07:53:53.398852719 +0000 UTC m=+0.093960617 container cleanup 6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:53 np0005539551 systemd[1]: libpod-conmon-6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607.scope: Deactivated successfully.
Nov 29 02:53:53 np0005539551 podman[236245]: 2025-11-29 07:53:53.468033354 +0000 UTC m=+0.045177939 container remove 6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.474 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2741f80b-d4c1-4d8b-b9c0-99411a4b33ea]: (4, ('Sat Nov 29 07:53:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 (6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607)\n6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607\nSat Nov 29 07:53:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 (6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607)\n6e592981983c58f186b4362432173ce6d5c28b8a532b66ee27ca183467008607\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.475 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c34ea594-bb9e-4819-af8a-f8b411706df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.476 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67969051-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.478 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 kernel: tap67969051-e0: left promiscuous mode
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.499 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.501 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2ed67e-60bc-4c1c-918f-9999c51cb0c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.507 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402818.5071352, 3077d593-c016-494e-aead-884249932b7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.507 227364 INFO nova.compute.manager [-] [instance: 3077d593-c016-494e-aead-884249932b7d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.520 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[100db24e-7edf-4c95-828e-09d06791995a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.522 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8034db-2330-42c8-a517-b892d3e00b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.537 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[41067373-e00f-45bc-b219-f9a5303428af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586363, 'reachable_time': 22124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236263, 'error': None, 'target': 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.539 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:53:53.539 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[531641f2-d450-43f8-ba54-681d791e137e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.543 227364 DEBUG nova.compute.manager [None req-f5bd91e2-daea-49d1-817d-d19bf2eb9ed3 - - - - - -] [instance: 3077d593-c016-494e-aead-884249932b7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:53 np0005539551 systemd[1]: run-netns-ovnmeta\x2d67969051\x2defe2\x2d48f0\x2d99e2\x2dc96ec0167864.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.957 227364 DEBUG nova.compute.manager [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.958 227364 DEBUG oslo_concurrency.lockutils [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.958 227364 DEBUG oslo_concurrency.lockutils [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.959 227364 DEBUG oslo_concurrency.lockutils [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.959 227364 DEBUG nova.compute.manager [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:53 np0005539551 nova_compute[227360]: 2025-11-29 07:53:53.959 227364 DEBUG nova.compute.manager [req-da2fd444-7992-4cb6-b652-59e5974811fb req-df20d465-01d2-4d6b-ba07-b66fcc39b908 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.228 227364 DEBUG nova.network.neutron [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updated VIF entry in instance network info cache for port 230cb1ef-c551-4666-88fe-e49994b798e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.228 227364 DEBUG nova.network.neutron [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating instance_info_cache with network_info: [{"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.252 227364 DEBUG oslo_concurrency.lockutils [req-f1de01b5-8acd-4fb2-8431-dfc4600518e7 req-b48c76a6-2677-4617-90c9-9f69bfba8e1e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.391 227364 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Activated binding for port 230cb1ef-c551-4666-88fe-e49994b798e9 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.391 227364 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.392 227364 DEBUG nova.virt.libvirt.vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2050487278',display_name='tempest-LiveMigrationTest-server-2050487278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2050487278',id=13,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-f3pegydb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:40Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=2bb12f77-8958-446b-813d-a59f149a549b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.392 227364 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.393 227364 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.393 227364 DEBUG os_vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.394 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.395 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cb1ef-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.402 227364 INFO os_vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5')#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.402 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.402 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.403 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.403 227364 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.403 227364 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Deleting instance files /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b_del#033[00m
Nov 29 02:53:54 np0005539551 nova_compute[227360]: 2025-11-29 07:53:54.404 227364 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Deletion of /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b_del complete#033[00m
Nov 29 02:53:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.240 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.240 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.241 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.241 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.242 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.242 227364 WARNING nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.243 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.244 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.244 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.245 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.245 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.245 227364 WARNING nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.246 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.246 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.247 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.247 227364 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.248 227364 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:55 np0005539551 nova_compute[227360]: 2025-11-29 07:53:55.248 227364 WARNING nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:56.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.339 227364 DEBUG nova.compute.manager [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.341 227364 DEBUG oslo_concurrency.lockutils [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.341 227364 DEBUG oslo_concurrency.lockutils [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.342 227364 DEBUG oslo_concurrency.lockutils [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.342 227364 DEBUG nova.compute.manager [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.343 227364 WARNING nova.compute.manager [req-e4e73d0c-793a-44db-b9e2-a437f0f6a86e req-7a403f20-a4de-49b4-a179-101f4bf62811 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.754 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402822.7530105, ccc93294-177e-4d6b-83ae-d06cb1d8bd2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.755 227364 INFO nova.compute.manager [-] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:53:57 np0005539551 nova_compute[227360]: 2025-11-29 07:53:57.776 227364 DEBUG nova.compute.manager [None req-c5fd567c-0ac6-4fc2-b223-5cbc5422d72b - - - - - -] [instance: ccc93294-177e-4d6b-83ae-d06cb1d8bd2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:58 np0005539551 nova_compute[227360]: 2025-11-29 07:53:58.171 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:53:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.929 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.929 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.930 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.950 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.951 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.951 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.951 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:53:59 np0005539551 nova_compute[227360]: 2025-11-29 07:53:59.951 227364 DEBUG oslo_concurrency.processutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:00.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124353802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.385 227364 DEBUG oslo_concurrency.processutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.571 227364 WARNING nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.573 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4938MB free_disk=20.89746856689453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.573 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.573 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.608 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration for instance 2bb12f77-8958-446b-813d-a59f149a549b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.640 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.666 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration c850deb7-f09e-4436-ad51-149f3e81ddfe is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.667 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.667 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:54:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:54:00 np0005539551 nova_compute[227360]: 2025-11-29 07:54:00.717 227364 DEBUG oslo_concurrency.processutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1906390428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.279 227364 DEBUG oslo_concurrency.processutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.285 227364 DEBUG nova.compute.provider_tree [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.305 227364 DEBUG nova.scheduler.client.report [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.331 227364 DEBUG nova.compute.resource_tracker [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.332 227364 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.337 227364 INFO nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.452 227364 INFO nova.scheduler.client.report [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Deleted allocation for migration c850deb7-f09e-4436-ad51-149f3e81ddfe#033[00m
Nov 29 02:54:01 np0005539551 nova_compute[227360]: 2025-11-29 07:54:01.453 227364 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:54:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:02.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:02 np0005539551 nova_compute[227360]: 2025-11-29 07:54:02.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:02.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:03 np0005539551 nova_compute[227360]: 2025-11-29 07:54:03.173 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:04.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539551 nova_compute[227360]: 2025-11-29 07:54:04.446 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:05 np0005539551 nova_compute[227360]: 2025-11-29 07:54:05.427 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:05 np0005539551 nova_compute[227360]: 2025-11-29 07:54:05.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:54:05 np0005539551 nova_compute[227360]: 2025-11-29 07:54:05.443 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:54:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:07 np0005539551 nova_compute[227360]: 2025-11-29 07:54:07.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:07.104 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:07.104 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:07 np0005539551 nova_compute[227360]: 2025-11-29 07:54:07.421 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:07 np0005539551 nova_compute[227360]: 2025-11-29 07:54:07.421 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:07 np0005539551 nova_compute[227360]: 2025-11-29 07:54:07.860 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402832.8593802, 2bb12f77-8958-446b-813d-a59f149a549b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:07 np0005539551 nova_compute[227360]: 2025-11-29 07:54:07.861 227364 INFO nova.compute.manager [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:08 np0005539551 nova_compute[227360]: 2025-11-29 07:54:08.078 227364 DEBUG nova.compute.manager [None req-b659f6bb-99c4-4774-8625-803d77073fbc - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:08 np0005539551 nova_compute[227360]: 2025-11-29 07:54:08.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:08.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:08 np0005539551 nova_compute[227360]: 2025-11-29 07:54:08.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:08 np0005539551 podman[236361]: 2025-11-29 07:54:08.608090926 +0000 UTC m=+0.055928134 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 02:54:08 np0005539551 podman[236360]: 2025-11-29 07:54:08.614358467 +0000 UTC m=+0.061780724 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 02:54:08 np0005539551 podman[236359]: 2025-11-29 07:54:08.634967493 +0000 UTC m=+0.088177829 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:09 np0005539551 nova_compute[227360]: 2025-11-29 07:54:09.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539551 nova_compute[227360]: 2025-11-29 07:54:09.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:54:09 np0005539551 nova_compute[227360]: 2025-11-29 07:54:09.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:54:09 np0005539551 nova_compute[227360]: 2025-11-29 07:54:09.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:54:09 np0005539551 nova_compute[227360]: 2025-11-29 07:54:09.480 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:10.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.441 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.442 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.442 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.442 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.443 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/228635053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:10 np0005539551 nova_compute[227360]: 2025-11-29 07:54:10.858 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.019 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.020 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4954MB free_disk=20.897281646728516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.021 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.021 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.079 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.079 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.093 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/350040874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.591 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.596 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.616 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.618 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.618 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.619 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:11 np0005539551 nova_compute[227360]: 2025-11-29 07:54:11.619 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:54:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:12.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:12 np0005539551 nova_compute[227360]: 2025-11-29 07:54:12.941 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:12 np0005539551 nova_compute[227360]: 2025-11-29 07:54:12.942 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:12 np0005539551 nova_compute[227360]: 2025-11-29 07:54:12.942 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:54:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:13.106 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:13 np0005539551 nova_compute[227360]: 2025-11-29 07:54:13.177 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:14.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539551 nova_compute[227360]: 2025-11-29 07:54:14.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.075 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.075 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.108 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.333 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.333 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.341 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.341 227364 INFO nova.compute.claims [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:54:15 np0005539551 nova_compute[227360]: 2025-11-29 07:54:15.693 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4228200092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.132 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.138 227364 DEBUG nova.compute.provider_tree [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.162 227364 DEBUG nova.scheduler.client.report [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.217 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.218 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:16.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.270 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.271 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.294 227364 INFO nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.318 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.416 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.416 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.419 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.421 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.421 227364 INFO nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating image(s)#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.456 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.489 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.516 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.520 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.545 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.551 227364 DEBUG nova.policy [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '446b0f05699845e8bd9f7d59c787f671', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f48e629446148199d44b34243b98b8a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.599 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.600 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.600 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.601 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.626 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.629 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.647 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.648 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.657 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.657 227364 INFO nova.compute.claims [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:54:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:16 np0005539551 nova_compute[227360]: 2025-11-29 07:54:16.935 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/30711504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.409 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.415 227364 DEBUG nova.compute.provider_tree [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.744 227364 DEBUG nova.scheduler.client.report [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.916 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.977 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.977 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:17 np0005539551 nova_compute[227360]: 2025-11-29 07:54:17.984 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:54:18 np0005539551 nova_compute[227360]: 2025-11-29 07:54:18.179 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:18.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:18 np0005539551 nova_compute[227360]: 2025-11-29 07:54:18.403 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:18 np0005539551 nova_compute[227360]: 2025-11-29 07:54:18.404 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:18 np0005539551 nova_compute[227360]: 2025-11-29 07:54:18.412 227364 DEBUG nova.objects.instance [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:18 np0005539551 nova_compute[227360]: 2025-11-29 07:54:18.731 227364 DEBUG nova.policy [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '446b0f05699845e8bd9f7d59c787f671', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f48e629446148199d44b34243b98b8a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.334 227364 INFO nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.359 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.360 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Ensure instance console log exists: /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.361 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.361 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.361 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.363 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Successfully created port: 10cc59c9-d730-4ae4-91ea-f799f3de9f32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.404 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:19.846 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:19.846 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:19.847 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.893 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.894 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.895 227364 INFO nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Creating image(s)#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.924 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.953 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.982 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:19 np0005539551 nova_compute[227360]: 2025-11-29 07:54:19.986 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.044 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.045 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.045 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.045 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.070 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.074 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ec72da52-17dd-401b-8538-90262cfe6006_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:20.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.388 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ec72da52-17dd-401b-8538-90262cfe6006_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:20 np0005539551 nova_compute[227360]: 2025-11-29 07:54:20.463 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:54:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.409 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Successfully created port: fe9bbbf6-1a8e-4407-b98e-d689945a1535 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.485 227364 DEBUG nova.objects.instance [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid ec72da52-17dd-401b-8538-90262cfe6006 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.498 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.498 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Ensure instance console log exists: /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.499 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.499 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:21 np0005539551 nova_compute[227360]: 2025-11-29 07:54:21.499 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.124 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Successfully updated port: 10cc59c9-d730-4ae4-91ea-f799f3de9f32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.166 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.167 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquired lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.167 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:22.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.285 227364 DEBUG nova.compute.manager [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-changed-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.286 227364 DEBUG nova.compute.manager [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Refreshing instance network info cache due to event network-changed-10cc59c9-d730-4ae4-91ea-f799f3de9f32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.286 227364 DEBUG oslo_concurrency.lockutils [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:22 np0005539551 nova_compute[227360]: 2025-11-29 07:54:22.424 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.198 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Successfully updated port: fe9bbbf6-1a8e-4407-b98e-d689945a1535 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.216 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.217 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquired lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.217 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.740 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.829 227364 DEBUG nova.network.neutron [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updating instance_info_cache with network_info: [{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.857 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Releasing lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.857 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance network_info: |[{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.858 227364 DEBUG oslo_concurrency.lockutils [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.858 227364 DEBUG nova.network.neutron [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Refreshing network info cache for port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.861 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start _get_guest_xml network_info=[{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.865 227364 WARNING nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.868 227364 DEBUG nova.virt.libvirt.host [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.869 227364 DEBUG nova.virt.libvirt.host [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.871 227364 DEBUG nova.virt.libvirt.host [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.871 227364 DEBUG nova.virt.libvirt.host [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.873 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.873 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.873 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.874 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.874 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.874 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.874 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.874 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.875 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.875 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.875 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.875 227364 DEBUG nova.virt.hardware [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:23 np0005539551 nova_compute[227360]: 2025-11-29 07:54:23.878 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248607749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/506205900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.292 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.328 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.332 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.488 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.565 227364 DEBUG nova.compute.manager [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-changed-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.566 227364 DEBUG nova.compute.manager [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Refreshing instance network info cache due to event network-changed-fe9bbbf6-1a8e-4407-b98e-d689945a1535. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.566 227364 DEBUG oslo_concurrency.lockutils [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3562490356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.750 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.751 227364 DEBUG nova.virt.libvirt.vif [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:16Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.751 227364 DEBUG nova.network.os_vif_util [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.752 227364 DEBUG nova.network.os_vif_util [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.753 227364 DEBUG nova.objects.instance [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.774 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <uuid>bed5ee49-4f19-4a80-a70a-8972c9a68218</uuid>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <name>instance-00000011</name>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminTestJSON-server-818783229</nova:name>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:54:23</nova:creationTime>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <nova:port uuid="10cc59c9-d730-4ae4-91ea-f799f3de9f32">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="serial">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="uuid">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:8b:38:bc"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <target dev="tap10cc59c9-d7"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log" append="off"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:54:24 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:54:24 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:54:24 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:54:24 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.776 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Preparing to wait for external event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.777 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.777 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.778 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.780 227364 DEBUG nova.virt.libvirt.vif [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:16Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.780 227364 DEBUG nova.network.os_vif_util [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.780 227364 DEBUG nova.network.os_vif_util [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.781 227364 DEBUG os_vif [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.784 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.784 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.787 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.787 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10cc59c9-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.788 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10cc59c9-d7, col_values=(('external_ids', {'iface-id': '10cc59c9-d730-4ae4-91ea-f799f3de9f32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:38:bc', 'vm-uuid': 'bed5ee49-4f19-4a80-a70a-8972c9a68218'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.789 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.792 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:24 np0005539551 NetworkManager[48922]: <info>  [1764402864.7924] manager: (tap10cc59c9-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.796 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.796 227364 INFO os_vif [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.856 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.857 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.857 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:8b:38:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.858 227364 INFO nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Using config drive#033[00m
Nov 29 02:54:24 np0005539551 nova_compute[227360]: 2025-11-29 07:54:24.887 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.193 227364 INFO nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating config drive at /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.199 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw1iz8xt5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.321 227364 DEBUG nova.network.neutron [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updating instance_info_cache with network_info: [{"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.328 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw1iz8xt5" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.358 227364 DEBUG nova.storage.rbd_utils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.362 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.393 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Releasing lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.394 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Instance network_info: |[{"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.395 227364 DEBUG oslo_concurrency.lockutils [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.395 227364 DEBUG nova.network.neutron [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Refreshing network info cache for port fe9bbbf6-1a8e-4407-b98e-d689945a1535 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.398 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Start _get_guest_xml network_info=[{"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.405 227364 WARNING nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.410 227364 DEBUG nova.virt.libvirt.host [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.411 227364 DEBUG nova.virt.libvirt.host [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.419 227364 DEBUG nova.virt.libvirt.host [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.420 227364 DEBUG nova.virt.libvirt.host [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.421 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.422 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.422 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.423 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.423 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.423 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.423 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.424 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.424 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.424 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.425 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.425 227364 DEBUG nova.virt.hardware [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.429 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.538 227364 DEBUG oslo_concurrency.processutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.539 227364 INFO nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting local config drive /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:25 np0005539551 kernel: tap10cc59c9-d7: entered promiscuous mode
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.596 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.5995] manager: (tap10cc59c9-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 02:54:25 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:25Z|00083|binding|INFO|Claiming lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 for this chassis.
Nov 29 02:54:25 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:25Z|00084|binding|INFO|10cc59c9-d730-4ae4-91ea-f799f3de9f32: Claiming fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539551 systemd-udevd[236996]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.630 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.631 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.633 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:54:25 np0005539551 systemd-machined[190756]: New machine qemu-8-instance-00000011.
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.6462] device (tap10cc59c9-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.6471] device (tap10cc59c9-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:25 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.650 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef2439d-60a8-4ce1-82c9-d2bfebcd023a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.653 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62e5f2a3-c1 in ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.655 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62e5f2a3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.655 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dc59e942-169e-4067-ae48-3db5728cfba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.657 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d94223e3-455a-428f-aa7a-f01b7e62d27f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.671 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba860e6-d760-4a53-80de-a60dca89231a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 systemd[1]: Started Virtual Machine qemu-8-instance-00000011.
Nov 29 02:54:25 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:25Z|00085|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 ovn-installed in OVS
Nov 29 02:54:25 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:25Z|00086|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 up in Southbound
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.691 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2b45d-c14e-4ada-b23c-d2167b5353d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.725 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ac52c185-5203-4c1c-b75e-6f90ae1368f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 systemd-udevd[237000]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.731 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[163e0955-9ade-47d0-a9f0-3755f8227c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.7330] manager: (tap62e5f2a3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.758 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[fea60cd3-64e0-4502-a843-bfa358728c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.761 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3f7912-7728-4113-a13a-f15ca3abb339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.7849] device (tap62e5f2a3-c0): carrier: link connected
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.791 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3158bd-053a-49b8-9c80-fe9d6c26aba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.810 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[db287c71-0386-44c8-b8ba-0044560d4daa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237031, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.829 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5d84c25f-d0c7-4265-9e9d-e8dc25bf3be4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:9d00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591266, 'tstamp': 591266}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237032, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.850 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[07a0b815-93fb-46f9-9f63-76c34bf6efed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237033, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2220897879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.887 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2b39c51a-5033-41dc-b241-6731af27b267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.895 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.929 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.935 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.947 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6656f0e-dd3d-4848-b251-0c3fce393242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.948 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.948 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.949 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:25 np0005539551 NetworkManager[48922]: <info>  [1764402865.9511] manager: (tap62e5f2a3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 02:54:25 np0005539551 kernel: tap62e5f2a3-c0: entered promiscuous mode
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.954 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:25 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:25Z|00087|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.966 227364 DEBUG nova.network.neutron [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updated VIF entry in instance network info cache for port 10cc59c9-d730-4ae4-91ea-f799f3de9f32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.967 227364 DEBUG nova.network.neutron [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updating instance_info_cache with network_info: [{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.971 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.973 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.973 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c97407-625e-4f0e-bec8-971ffc0ba131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.974 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:25.975 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'env', 'PROCESS_TAG=haproxy-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:54:25 np0005539551 nova_compute[227360]: 2025-11-29 07:54:25.997 227364 DEBUG oslo_concurrency.lockutils [req-6244f10b-38b4-4f60-b424-c9cce8039b7d req-dca2b51f-1f49-4585-8bce-77b6ec15d22c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.192 227364 DEBUG nova.compute.manager [req-43e130d5-2f63-405d-b59e-54b2548ab72d req-bfa015d2-891e-448b-afa1-6e379f8de0fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.193 227364 DEBUG oslo_concurrency.lockutils [req-43e130d5-2f63-405d-b59e-54b2548ab72d req-bfa015d2-891e-448b-afa1-6e379f8de0fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.193 227364 DEBUG oslo_concurrency.lockutils [req-43e130d5-2f63-405d-b59e-54b2548ab72d req-bfa015d2-891e-448b-afa1-6e379f8de0fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.193 227364 DEBUG oslo_concurrency.lockutils [req-43e130d5-2f63-405d-b59e-54b2548ab72d req-bfa015d2-891e-448b-afa1-6e379f8de0fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.194 227364 DEBUG nova.compute.manager [req-43e130d5-2f63-405d-b59e-54b2548ab72d req-bfa015d2-891e-448b-afa1-6e379f8de0fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Processing event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3400731141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.416 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.418 227364 DEBUG nova.virt.libvirt.vif [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1514710041',display_name='tempest-ServersAdminTestJSON-server-1514710041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1514710041',id=18,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-jat5344a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:19Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=ec72da52-17dd-401b-8538-90262cfe6006,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.419 227364 DEBUG nova.network.os_vif_util [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.419 227364 DEBUG nova.network.os_vif_util [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.420 227364 DEBUG nova.objects.instance [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid ec72da52-17dd-401b-8538-90262cfe6006 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.434 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <uuid>ec72da52-17dd-401b-8538-90262cfe6006</uuid>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <name>instance-00000012</name>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminTestJSON-server-1514710041</nova:name>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:54:25</nova:creationTime>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <nova:port uuid="fe9bbbf6-1a8e-4407-b98e-d689945a1535">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="serial">ec72da52-17dd-401b-8538-90262cfe6006</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="uuid">ec72da52-17dd-401b-8538-90262cfe6006</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/ec72da52-17dd-401b-8538-90262cfe6006_disk">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/ec72da52-17dd-401b-8538-90262cfe6006_disk.config">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:f1:9c:fe"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <target dev="tapfe9bbbf6-1a"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/console.log" append="off"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:54:26 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:54:26 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:54:26 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:54:26 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.440 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Preparing to wait for external event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.441 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.441 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.442 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.442 227364 DEBUG nova.virt.libvirt.vif [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1514710041',display_name='tempest-ServersAdminTestJSON-server-1514710041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1514710041',id=18,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-jat5344a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:19Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=ec72da52-17dd-401b-8538-90262cfe6006,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.443 227364 DEBUG nova.network.os_vif_util [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.443 227364 DEBUG nova.network.os_vif_util [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.444 227364 DEBUG os_vif [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.445 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.445 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.446 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.448 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.448 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe9bbbf6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.449 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe9bbbf6-1a, col_values=(('external_ids', {'iface-id': 'fe9bbbf6-1a8e-4407-b98e-d689945a1535', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f1:9c:fe', 'vm-uuid': 'ec72da52-17dd-401b-8538-90262cfe6006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.450 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539551 NetworkManager[48922]: <info>  [1764402866.4513] manager: (tapfe9bbbf6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.452 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.455 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539551 nova_compute[227360]: 2025-11-29 07:54:26.456 227364 INFO os_vif [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a')#033[00m
Nov 29 02:54:26 np0005539551 podman[237105]: 2025-11-29 07:54:26.400979021 +0000 UTC m=+0.025446518 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:27 np0005539551 podman[237105]: 2025-11-29 07:54:27.201531834 +0000 UTC m=+0.825999351 container create 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:54:27 np0005539551 systemd[1]: Started libpod-conmon-253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a.scope.
Nov 29 02:54:27 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:54:27 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5ee9644dce2a28b3c201fdbdb02606960989570d416a16147f081c89894b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.373 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.374 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.375 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:f1:9c:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.375 227364 INFO nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Using config drive#033[00m
Nov 29 02:54:27 np0005539551 podman[237105]: 2025-11-29 07:54:27.603570485 +0000 UTC m=+1.228038062 container init 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:54:27 np0005539551 podman[237105]: 2025-11-29 07:54:27.611663087 +0000 UTC m=+1.236130574 container start 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.633 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:27 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [NOTICE]   (237168) : New worker (237184) forked
Nov 29 02:54:27 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [NOTICE]   (237168) : Loading success.
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.643 227364 DEBUG nova.network.neutron [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updated VIF entry in instance network info cache for port fe9bbbf6-1a8e-4407-b98e-d689945a1535. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.643 227364 DEBUG nova.network.neutron [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updating instance_info_cache with network_info: [{"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:27 np0005539551 nova_compute[227360]: 2025-11-29 07:54:27.682 227364 DEBUG oslo_concurrency.lockutils [req-2923ef3e-555a-4db7-8451-f4723057076e req-21a9cd6c-6bf4-4d2e-b3ed-cf2f7ad275f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.143 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402868.142553, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.144 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.145 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.148 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.151 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance spawned successfully.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.151 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.182 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.189 227364 INFO nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Creating config drive at /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.194 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp387r84et execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.229 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.238 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.242 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.243 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.243 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.244 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.244 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.245 227364 DEBUG nova.virt.libvirt.driver [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.280 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.281 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402868.1427488, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.281 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.295 227364 DEBUG nova.compute.manager [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.296 227364 DEBUG oslo_concurrency.lockutils [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.296 227364 DEBUG oslo_concurrency.lockutils [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.296 227364 DEBUG oslo_concurrency.lockutils [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.297 227364 DEBUG nova.compute.manager [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.297 227364 WARNING nova.compute.manager [req-5bebbe18-4dec-4b27-9a7c-1fc859e18d89 req-fd3ca05d-ecc8-4761-b056-c16226e68c51 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.314 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.321 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402868.1478033, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.322 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.344 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp387r84et" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.372 227364 DEBUG nova.storage.rbd_utils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image ec72da52-17dd-401b-8538-90262cfe6006_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.376 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config ec72da52-17dd-401b-8538-90262cfe6006_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.413 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.415 227364 INFO nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Took 11.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.416 227364 DEBUG nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.419 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.471 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.501 227364 INFO nova.compute.manager [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Took 13.32 seconds to build instance.#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.576 227364 DEBUG oslo_concurrency.lockutils [None req-0b6bb141-356c-43bc-90fe-b6e4c524bb96 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.733 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.753 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Triggering sync for uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.754 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Triggering sync for uuid ec72da52-17dd-401b-8538-90262cfe6006 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.754 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.754 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.755 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:28 np0005539551 nova_compute[227360]: 2025-11-29 07:54:28.791 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.450 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.716 227364 DEBUG oslo_concurrency.processutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config ec72da52-17dd-401b-8538-90262cfe6006_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.718 227364 INFO nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Deleting local config drive /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:31 np0005539551 kernel: tapfe9bbbf6-1a: entered promiscuous mode
Nov 29 02:54:31 np0005539551 NetworkManager[48922]: <info>  [1764402871.7821] manager: (tapfe9bbbf6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 02:54:31 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:31Z|00088|binding|INFO|Claiming lport fe9bbbf6-1a8e-4407-b98e-d689945a1535 for this chassis.
Nov 29 02:54:31 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:31Z|00089|binding|INFO|fe9bbbf6-1a8e-4407-b98e-d689945a1535: Claiming fa:16:3e:f1:9c:fe 10.100.0.9
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.786 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.793 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9c:fe 10.100.0.9'], port_security=['fa:16:3e:f1:9c:fe 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec72da52-17dd-401b-8538-90262cfe6006', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=fe9bbbf6-1a8e-4407-b98e-d689945a1535) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.795 139482 INFO neutron.agent.ovn.metadata.agent [-] Port fe9bbbf6-1a8e-4407-b98e-d689945a1535 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.799 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:54:31 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:31Z|00090|binding|INFO|Setting lport fe9bbbf6-1a8e-4407-b98e-d689945a1535 ovn-installed in OVS
Nov 29 02:54:31 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:31Z|00091|binding|INFO|Setting lport fe9bbbf6-1a8e-4407-b98e-d689945a1535 up in Southbound
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.825 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1010a78-bafc-43ae-a4f9-9cdcfb1c2bfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 systemd-udevd[237259]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:31 np0005539551 systemd-machined[190756]: New machine qemu-9-instance-00000012.
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.860 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[daebc7a0-4ea6-4802-984d-173164403aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 systemd[1]: Started Virtual Machine qemu-9-instance-00000012.
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.865 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5259ca3f-5898-4c2a-a364-ffe5244d9c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 NetworkManager[48922]: <info>  [1764402871.8707] device (tapfe9bbbf6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:31 np0005539551 NetworkManager[48922]: <info>  [1764402871.8761] device (tapfe9bbbf6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.908 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a7a955-f548-4fe4-bc34-650ef9711911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.926 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d84e80b6-7134-487b-a891-dc1a4d7c6725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237267, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.946 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf51525-59b9-4f0c-9b2f-160051ff2ffb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237271, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237271, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.947 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.949 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 nova_compute[227360]: 2025-11-29 07:54:31.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.952 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.953 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.953 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:31.953 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.576 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402872.5760658, ec72da52-17dd-401b-8538-90262cfe6006 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.577 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.609 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.612 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402872.5783336, ec72da52-17dd-401b-8538-90262cfe6006 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.612 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.638 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.641 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:32 np0005539551 nova_compute[227360]: 2025-11-29 07:54:32.673 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:32.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:33 np0005539551 nova_compute[227360]: 2025-11-29 07:54:33.189 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:34.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.134 227364 DEBUG nova.compute.manager [req-6f5d5779-b02c-4870-8e7a-b152321a5236 req-38686c0d-82c9-40fd-beba-ffec3b66ac85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.134 227364 DEBUG oslo_concurrency.lockutils [req-6f5d5779-b02c-4870-8e7a-b152321a5236 req-38686c0d-82c9-40fd-beba-ffec3b66ac85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.134 227364 DEBUG oslo_concurrency.lockutils [req-6f5d5779-b02c-4870-8e7a-b152321a5236 req-38686c0d-82c9-40fd-beba-ffec3b66ac85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.134 227364 DEBUG oslo_concurrency.lockutils [req-6f5d5779-b02c-4870-8e7a-b152321a5236 req-38686c0d-82c9-40fd-beba-ffec3b66ac85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.135 227364 DEBUG nova.compute.manager [req-6f5d5779-b02c-4870-8e7a-b152321a5236 req-38686c0d-82c9-40fd-beba-ffec3b66ac85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Processing event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.135 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.144 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402876.1378136, ec72da52-17dd-401b-8538-90262cfe6006 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.153 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.156 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.172 227364 INFO nova.virt.libvirt.driver [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Instance spawned successfully.#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.172 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.179 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.182 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.196 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.196 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.197 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.197 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.198 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.198 227364 DEBUG nova.virt.libvirt.driver [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.202 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.247 227364 INFO nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Took 16.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.248 227364 DEBUG nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.298 227364 INFO nova.compute.manager [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Took 19.70 seconds to build instance.#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.316 227364 DEBUG oslo_concurrency.lockutils [None req-8b17a4c3-df2c-4d77-af4f-755edae8fa1c 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.316 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "ec72da52-17dd-401b-8538-90262cfe6006" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.316 227364 INFO nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.317 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "ec72da52-17dd-401b-8538-90262cfe6006" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:36 np0005539551 nova_compute[227360]: 2025-11-29 07:54:36.503 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:36.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:54:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.9 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2646 syncs, 3.92 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3652 writes, 13K keys, 3652 commit groups, 1.0 writes per commit group, ingest: 14.53 MB, 0.02 MB/s#012Interval WAL: 3652 writes, 1404 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.188 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.212 227364 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.212 227364 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.213 227364 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.213 227364 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.214 227364 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] No waiting events found dispatching network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:38 np0005539551 nova_compute[227360]: 2025-11-29 07:54:38.214 227364 WARNING nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received unexpected event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:54:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:38.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:54:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369657371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:54:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:54:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369657371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:54:39 np0005539551 podman[237317]: 2025-11-29 07:54:39.600661086 +0000 UTC m=+0.052317444 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:39 np0005539551 podman[237316]: 2025-11-29 07:54:39.605750386 +0000 UTC m=+0.062328039 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:54:39 np0005539551 podman[237315]: 2025-11-29 07:54:39.638379041 +0000 UTC m=+0.096139127 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:40.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:41 np0005539551 nova_compute[227360]: 2025-11-29 07:54:41.505 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:42 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:42Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:54:42 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:42Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.052 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.053 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.076 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.191 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.234 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.234 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.240 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.240 227364 INFO nova.compute.claims [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.407 227364 DEBUG nova.scheduler.client.report [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.441 227364 DEBUG nova.scheduler.client.report [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.441 227364 DEBUG nova.compute.provider_tree [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.475 227364 DEBUG nova.scheduler.client.report [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.511 227364 DEBUG nova.scheduler.client.report [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:54:43 np0005539551 nova_compute[227360]: 2025-11-29 07:54:43.574 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2505888510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.027 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.033 227364 DEBUG nova.compute.provider_tree [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.098 227364 DEBUG nova.scheduler.client.report [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.129 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.130 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.189 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.190 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.208 227364 INFO nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.229 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.336 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.337 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.338 227364 INFO nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Creating image(s)#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.371 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.402 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.434 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.438 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.515 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.516 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.517 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.517 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.543 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.547 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 59925490-c9ac-438c-9a1f-4356f493b103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:44 np0005539551 nova_compute[227360]: 2025-11-29 07:54:44.827 227364 DEBUG nova.policy [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '446b0f05699845e8bd9f7d59c787f671', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f48e629446148199d44b34243b98b8a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:54:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:45 np0005539551 nova_compute[227360]: 2025-11-29 07:54:45.897 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Successfully created port: 9b9b45bf-9767-4795-b37f-3f17fed8dd49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:46.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:46 np0005539551 nova_compute[227360]: 2025-11-29 07:54:46.507 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.035 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 59925490-c9ac-438c-9a1f-4356f493b103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.109 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.231 227364 DEBUG nova.objects.instance [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid 59925490-c9ac-438c-9a1f-4356f493b103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.249 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.250 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Ensure instance console log exists: /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.250 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.251 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.251 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.429 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Successfully updated port: 9b9b45bf-9767-4795-b37f-3f17fed8dd49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.450 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.450 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquired lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.451 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:47 np0005539551 nova_compute[227360]: 2025-11-29 07:54:47.741 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.176 227364 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-changed-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.177 227364 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Refreshing instance network info cache due to event network-changed-9b9b45bf-9767-4795-b37f-3f17fed8dd49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.177 227364 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:48.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:48.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.881 227364 DEBUG nova.network.neutron [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Updating instance_info_cache with network_info: [{"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.977 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Releasing lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.977 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Instance network_info: |[{"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.978 227364 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.978 227364 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Refreshing network info cache for port 9b9b45bf-9767-4795-b37f-3f17fed8dd49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.981 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Start _get_guest_xml network_info=[{"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.985 227364 WARNING nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.989 227364 DEBUG nova.virt.libvirt.host [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.990 227364 DEBUG nova.virt.libvirt.host [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.992 227364 DEBUG nova.virt.libvirt.host [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.992 227364 DEBUG nova.virt.libvirt.host [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.993 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.994 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.994 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.994 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.994 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.995 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.995 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.995 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.995 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.996 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.996 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.996 227364 DEBUG nova.virt.hardware [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:48 np0005539551 nova_compute[227360]: 2025-11-29 07:54:48.999 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2083952141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:49 np0005539551 nova_compute[227360]: 2025-11-29 07:54:49.587 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:49 np0005539551 nova_compute[227360]: 2025-11-29 07:54:49.620 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:49 np0005539551 nova_compute[227360]: 2025-11-29 07:54:49.624 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3955525967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.105 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.108 227364 DEBUG nova.virt.libvirt.vif [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2003089732',display_name='tempest-ServersAdminTestJSON-server-2003089732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2003089732',id=22,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-oesg0t7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:44Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=59925490-c9ac-438c-9a1f-4356f493b103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.108 227364 DEBUG nova.network.os_vif_util [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.109 227364 DEBUG nova.network.os_vif_util [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.111 227364 DEBUG nova.objects.instance [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid 59925490-c9ac-438c-9a1f-4356f493b103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.132 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <uuid>59925490-c9ac-438c-9a1f-4356f493b103</uuid>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <name>instance-00000016</name>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminTestJSON-server-2003089732</nova:name>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:54:48</nova:creationTime>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <nova:port uuid="9b9b45bf-9767-4795-b37f-3f17fed8dd49">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="serial">59925490-c9ac-438c-9a1f-4356f493b103</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="uuid">59925490-c9ac-438c-9a1f-4356f493b103</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/59925490-c9ac-438c-9a1f-4356f493b103_disk">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/59925490-c9ac-438c-9a1f-4356f493b103_disk.config">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:33:40:96"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <target dev="tap9b9b45bf-97"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/console.log" append="off"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:54:50 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:54:50 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:54:50 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:54:50 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.134 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Preparing to wait for external event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.134 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.135 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.135 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.136 227364 DEBUG nova.virt.libvirt.vif [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2003089732',display_name='tempest-ServersAdminTestJSON-server-2003089732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2003089732',id=22,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-oesg0t7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:44Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=59925490-c9ac-438c-9a1f-4356f493b103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.136 227364 DEBUG nova.network.os_vif_util [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.137 227364 DEBUG nova.network.os_vif_util [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.137 227364 DEBUG os_vif [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.138 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.138 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.139 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.143 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b9b45bf-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.143 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b9b45bf-97, col_values=(('external_ids', {'iface-id': '9b9b45bf-9767-4795-b37f-3f17fed8dd49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:40:96', 'vm-uuid': '59925490-c9ac-438c-9a1f-4356f493b103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:50 np0005539551 NetworkManager[48922]: <info>  [1764402890.1459] manager: (tap9b9b45bf-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.147 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.152 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.154 227364 INFO os_vif [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97')#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.266 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.267 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.267 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:33:40:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.268 227364 INFO nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Using config drive#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.292 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.704 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:50.704 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:50.705 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:50.706 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:50.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.783 227364 INFO nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Creating config drive at /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.788 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tean32n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.884 227364 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Updated VIF entry in instance network info cache for port 9b9b45bf-9767-4795-b37f-3f17fed8dd49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.885 227364 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Updating instance_info_cache with network_info: [{"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.903 227364 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-59925490-c9ac-438c-9a1f-4356f493b103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.919 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tean32n" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:50 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:50Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f1:9c:fe 10.100.0.9
Nov 29 02:54:50 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:50Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f1:9c:fe 10.100.0.9
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.952 227364 DEBUG nova.storage.rbd_utils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 59925490-c9ac-438c-9a1f-4356f493b103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:50 np0005539551 nova_compute[227360]: 2025-11-29 07:54:50.957 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config 59925490-c9ac-438c-9a1f-4356f493b103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.236 227364 DEBUG oslo_concurrency.processutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config 59925490-c9ac-438c-9a1f-4356f493b103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.238 227364 INFO nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Deleting local config drive /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:51 np0005539551 kernel: tap9b9b45bf-97: entered promiscuous mode
Nov 29 02:54:51 np0005539551 NetworkManager[48922]: <info>  [1764402891.3004] manager: (tap9b9b45bf-97): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 02:54:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:51Z|00092|binding|INFO|Claiming lport 9b9b45bf-9767-4795-b37f-3f17fed8dd49 for this chassis.
Nov 29 02:54:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:51Z|00093|binding|INFO|9b9b45bf-9767-4795-b37f-3f17fed8dd49: Claiming fa:16:3e:33:40:96 10.100.0.3
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.302 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:51Z|00094|binding|INFO|Setting lport 9b9b45bf-9767-4795-b37f-3f17fed8dd49 ovn-installed in OVS
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.324 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.327 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539551 systemd-machined[190756]: New machine qemu-10-instance-00000016.
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.334 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:40:96 10.100.0.3'], port_security=['fa:16:3e:33:40:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '59925490-c9ac-438c-9a1f-4356f493b103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9b9b45bf-9767-4795-b37f-3f17fed8dd49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.335 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9b9b45bf-9767-4795-b37f-3f17fed8dd49 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:54:51 np0005539551 ovn_controller[130266]: 2025-11-29T07:54:51Z|00095|binding|INFO|Setting lport 9b9b45bf-9767-4795-b37f-3f17fed8dd49 up in Southbound
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.337 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:54:51 np0005539551 systemd[1]: Started Virtual Machine qemu-10-instance-00000016.
Nov 29 02:54:51 np0005539551 systemd-udevd[237704]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.351 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf6662-e1ac-4b54-87c8-7a8ff468aa5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 NetworkManager[48922]: <info>  [1764402891.3676] device (tap9b9b45bf-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:51 np0005539551 NetworkManager[48922]: <info>  [1764402891.3688] device (tap9b9b45bf-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.383 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[69e1ca15-d326-4c2b-a414-d2334dbebd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.387 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a96563c6-e443-49c3-8922-f502a8c4fd0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.414 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[af7198bf-3535-4b3d-8057-b7f89b230377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.433 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab27b40-b312-4e52-924c-e6501892f176]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237716, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.445 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5959f3b0-f200-4254-983d-5a51526ec13c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237718, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237718, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.447 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.448 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.453 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.453 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.454 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:54:51.454 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.608 227364 DEBUG nova.compute.manager [req-ec045426-46eb-4081-a42e-0a565245a861 req-bf2f3e10-3309-4260-a5d3-6cffff606419 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.608 227364 DEBUG oslo_concurrency.lockutils [req-ec045426-46eb-4081-a42e-0a565245a861 req-bf2f3e10-3309-4260-a5d3-6cffff606419 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.609 227364 DEBUG oslo_concurrency.lockutils [req-ec045426-46eb-4081-a42e-0a565245a861 req-bf2f3e10-3309-4260-a5d3-6cffff606419 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.609 227364 DEBUG oslo_concurrency.lockutils [req-ec045426-46eb-4081-a42e-0a565245a861 req-bf2f3e10-3309-4260-a5d3-6cffff606419 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:51 np0005539551 nova_compute[227360]: 2025-11-29 07:54:51.609 227364 DEBUG nova.compute.manager [req-ec045426-46eb-4081-a42e-0a565245a861 req-bf2f3e10-3309-4260-a5d3-6cffff606419 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Processing event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.089 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.090 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402892.08826, 59925490-c9ac-438c-9a1f-4356f493b103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.090 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.093 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.097 227364 INFO nova.virt.libvirt.driver [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Instance spawned successfully.#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.098 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.126 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.130 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.208 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.209 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402892.0886083, 59925490-c9ac-438c-9a1f-4356f493b103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.209 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.213 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.213 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.214 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.214 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.215 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.215 227364 DEBUG nova.virt.libvirt.driver [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.381 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.384 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402892.0923579, 59925490-c9ac-438c-9a1f-4356f493b103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.385 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.436 227364 INFO nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Took 8.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.436 227364 DEBUG nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.437 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.445 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.492 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.542 227364 INFO nova.compute.manager [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Took 9.41 seconds to build instance.#033[00m
Nov 29 02:54:52 np0005539551 nova_compute[227360]: 2025-11-29 07:54:52.626 227364 DEBUG oslo_concurrency.lockutils [None req-d56f82ca-4fe7-4bd0-a6e4-f3425e7b81dd 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.241 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.830 227364 DEBUG nova.compute.manager [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.831 227364 DEBUG oslo_concurrency.lockutils [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.831 227364 DEBUG oslo_concurrency.lockutils [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.832 227364 DEBUG oslo_concurrency.lockutils [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.832 227364 DEBUG nova.compute.manager [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] No waiting events found dispatching network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:53 np0005539551 nova_compute[227360]: 2025-11-29 07:54:53.832 227364 WARNING nova.compute.manager [req-c4c5e618-3e9d-4df1-9b9d-e17b9d747d63 req-0fd87092-93ef-489e-aae5-40c8ad7e30bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received unexpected event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:54:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:55 np0005539551 nova_compute[227360]: 2025-11-29 07:54:55.178 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:56.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:58 np0005539551 nova_compute[227360]: 2025-11-29 07:54:58.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:58.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:54:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:54:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:55:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:00 np0005539551 nova_compute[227360]: 2025-11-29 07:55:00.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:00.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:00.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:02.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:02.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:03 np0005539551 nova_compute[227360]: 2025-11-29 07:55:03.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:04.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:04.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:05 np0005539551 nova_compute[227360]: 2025-11-29 07:55:05.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:06.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:06.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:07Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:40:96 10.100.0.3
Nov 29 02:55:07 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:07Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:40:96 10.100.0.3
Nov 29 02:55:07 np0005539551 nova_compute[227360]: 2025-11-29 07:55:07.431 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:08 np0005539551 nova_compute[227360]: 2025-11-29 07:55:08.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:08.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:09 np0005539551 nova_compute[227360]: 2025-11-29 07:55:09.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:09 np0005539551 nova_compute[227360]: 2025-11-29 07:55:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.231 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:55:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 4668 writes, 26K keys, 4668 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 4667 writes, 4667 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1447 writes, 6831 keys, 1447 commit groups, 1.0 writes per commit group, ingest: 15.22 MB, 0.03 MB/s#012Interval WAL: 1446 writes, 1446 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.6      4.01              0.11        14    0.287       0      0       0.0       0.0#012  L6      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     19.3     16.6      7.87              0.47        13    0.605     67K   6528       0.0       0.0#012 Sum      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.3     12.8     13.6     11.88              0.58        27    0.440     67K   6528       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.1     56.4     55.8      0.80              0.16         8    0.099     23K   1973       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     19.3     16.6      7.87              0.47        13    0.605     67K   6528       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.6      4.01              0.11        13    0.308       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 11.9 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 11.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000125 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(608,10.72 MB,3.52495%) FilterBlock(27,190.23 KB,0.0611104%) IndexBlock(27,371.66 KB,0.11939%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:55:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:55:10 np0005539551 podman[237943]: 2025-11-29 07:55:10.622439047 +0000 UTC m=+0.066728460 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 02:55:10 np0005539551 podman[237944]: 2025-11-29 07:55:10.625390498 +0000 UTC m=+0.058966957 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:55:10 np0005539551 podman[237942]: 2025-11-29 07:55:10.682063541 +0000 UTC m=+0.125818910 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.794 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.794 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.794 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:55:10 np0005539551 nova_compute[227360]: 2025-11-29 07:55:10.794 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:12.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.948 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updating instance_info_cache with network_info: [{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.974 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-bed5ee49-4f19-4a80-a70a-8972c9a68218" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.975 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.976 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.976 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.977 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.977 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.978 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:55:12 np0005539551 nova_compute[227360]: 2025-11-29 07:55:12.979 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.001 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.001 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.002 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.002 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.003 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:13 np0005539551 nova_compute[227360]: 2025-11-29 07:55:13.275 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:14.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:15 np0005539551 nova_compute[227360]: 2025-11-29 07:55:15.235 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2512758691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:15 np0005539551 nova_compute[227360]: 2025-11-29 07:55:15.940 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.937s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.041 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.042 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.048 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.048 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.053 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.054 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.222 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.224 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4366MB free_disk=20.694351196289062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.224 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.224 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance bed5ee49-4f19-4a80-a70a-8972c9a68218 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance ec72da52-17dd-401b-8538-90262cfe6006 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 59925490-c9ac-438c-9a1f-4356f493b103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.382 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.479 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4254374631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.917 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.924 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.940 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.957 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:16 np0005539551 nova_compute[227360]: 2025-11-29 07:55:16.957 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:18 np0005539551 nova_compute[227360]: 2025-11-29 07:55:18.368 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:19 np0005539551 nova_compute[227360]: 2025-11-29 07:55:19.815 227364 INFO nova.compute.manager [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Rebuilding instance#033[00m
Nov 29 02:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:19.847 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:19.847 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:19.848 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.249 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'trusted_certs' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.282 227364 DEBUG nova.compute.manager [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.473 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_requests' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.571 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.586 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.601 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.689 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:55:20 np0005539551 nova_compute[227360]: 2025-11-29 07:55:20.693 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:55:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:23 np0005539551 nova_compute[227360]: 2025-11-29 07:55:23.371 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:24.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:25 np0005539551 nova_compute[227360]: 2025-11-29 07:55:25.242 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:25 np0005539551 nova_compute[227360]: 2025-11-29 07:55:25.720 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance shutdown successfully after 5 seconds.#033[00m
Nov 29 02:55:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:26.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:26 np0005539551 kernel: tap10cc59c9-d7 (unregistering): left promiscuous mode
Nov 29 02:55:26 np0005539551 NetworkManager[48922]: <info>  [1764402926.6156] device (tap10cc59c9-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:26 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:26Z|00096|binding|INFO|Releasing lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 from this chassis (sb_readonly=0)
Nov 29 02:55:26 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:26Z|00097|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 down in Southbound
Nov 29 02:55:26 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:26Z|00098|binding|INFO|Removing iface tap10cc59c9-d7 ovn-installed in OVS
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.627 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.640 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.646 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.647 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.649 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.667 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8418528e-41da-4db8-8399-a3ada5812e65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 02:55:26 np0005539551 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Consumed 16.523s CPU time.
Nov 29 02:55:26 np0005539551 systemd-machined[190756]: Machine qemu-8-instance-00000011 terminated.
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.699 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3d58560a-dae2-4d81-a546-49e07bb7ef15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.701 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d063c1ad-5173-4d9d-97f5-8a0aeb6e2664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.731 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0a8456-0568-4ade-a137-110e5444ca89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.749 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bb344bc3-088b-4995-9239-65f3ef8f7a65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238062, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.758 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance destroyed successfully.#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.764 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance destroyed successfully.#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.765 227364 DEBUG nova.virt.libvirt.vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_st
ate='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:17Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.765 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[16288309-a522-4995-9cec-e2cd01eb57c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238071, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238071, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.766 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.767 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.767 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.768 227364 DEBUG os_vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.769 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.770 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10cc59c9-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.771 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.773 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.773 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.774 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:26.774 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.775 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:26 np0005539551 nova_compute[227360]: 2025-11-29 07:55:26.778 227364 INFO os_vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:55:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:28.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.436 227364 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.437 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.437 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.437 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.437 227364 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.438 227364 WARNING nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.438 227364 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.438 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.438 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.438 227364 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.439 227364 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.439 227364 WARNING nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 29 02:55:28 np0005539551 nova_compute[227360]: 2025-11-29 07:55:28.439 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:29.653 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:29.654 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:55:29 np0005539551 nova_compute[227360]: 2025-11-29 07:55:29.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:29.654 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:31 np0005539551 nova_compute[227360]: 2025-11-29 07:55:31.772 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:33 np0005539551 nova_compute[227360]: 2025-11-29 07:55:33.508 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:33 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 29 02:55:33 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:33.898132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:55:33 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 29 02:55:33 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933898186, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 251, "total_data_size": 5684195, "memory_usage": 5757480, "flush_reason": "Manual Compaction"}
Nov 29 02:55:33 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402934043053, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3717364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24230, "largest_seqno": 26613, "table_properties": {"data_size": 3707924, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20321, "raw_average_key_size": 20, "raw_value_size": 3688722, "raw_average_value_size": 3710, "num_data_blocks": 260, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402719, "oldest_key_time": 1764402719, "file_creation_time": 1764402933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 145012 microseconds, and 15666 cpu microseconds.
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369076661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369076661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.043134) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3717364 bytes OK
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.043166) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.439706) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.439756) EVENT_LOG_v1 {"time_micros": 1764402934439747, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.439782) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5673636, prev total WAL file size 5721954, number of live WAL files 2.
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.441572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3630KB)], [51(9546KB)]
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402934441651, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 13493309, "oldest_snapshot_seqno": -1}
Nov 29 02:55:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5776 keys, 11293929 bytes, temperature: kUnknown
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402934723535, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 11293929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253625, "index_size": 24770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146983, "raw_average_key_size": 25, "raw_value_size": 11147673, "raw_average_value_size": 1929, "num_data_blocks": 1010, "num_entries": 5776, "num_filter_entries": 5776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764402934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.724359) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11293929 bytes
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.743328) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.8 rd, 40.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 6293, records dropped: 517 output_compression: NoCompression
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.743355) EVENT_LOG_v1 {"time_micros": 1764402934743343, "job": 30, "event": "compaction_finished", "compaction_time_micros": 282491, "compaction_time_cpu_micros": 36335, "output_level": 6, "num_output_files": 1, "total_output_size": 11293929, "num_input_records": 6293, "num_output_records": 5776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402934744449, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402934747470, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.441404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.747529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.747534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.747537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.747539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:55:34.747542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:36.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:36 np0005539551 nova_compute[227360]: 2025-11-29 07:55:36.772 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:36.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539551 nova_compute[227360]: 2025-11-29 07:55:38.511 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539551 nova_compute[227360]: 2025-11-29 07:55:38.855 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting instance files /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del#033[00m
Nov 29 02:55:38 np0005539551 nova_compute[227360]: 2025-11-29 07:55:38.856 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deletion of /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del complete#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.096 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.097 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating image(s)#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.313 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.357 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.408 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.414 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.415 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:40.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:40 np0005539551 nova_compute[227360]: 2025-11-29 07:55:40.932 227364 DEBUG nova.virt.libvirt.imagebackend [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:55:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:41 np0005539551 podman[238150]: 2025-11-29 07:55:41.642959264 +0000 UTC m=+0.073073134 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:55:41 np0005539551 podman[238149]: 2025-11-29 07:55:41.64462127 +0000 UTC m=+0.078763300 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 02:55:41 np0005539551 podman[238148]: 2025-11-29 07:55:41.663100145 +0000 UTC m=+0.111138256 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:41 np0005539551 nova_compute[227360]: 2025-11-29 07:55:41.757 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402926.7558618, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:41 np0005539551 nova_compute[227360]: 2025-11-29 07:55:41.757 227364 INFO nova.compute.manager [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:41 np0005539551 nova_compute[227360]: 2025-11-29 07:55:41.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:42 np0005539551 nova_compute[227360]: 2025-11-29 07:55:42.471 227364 DEBUG nova.compute.manager [None req-52ea1d38-7c2c-416d-b405-4e81d0023bfb - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:42.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:43 np0005539551 nova_compute[227360]: 2025-11-29 07:55:43.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:44.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:44.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.053 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.128 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.130 227364 DEBUG nova.virt.images [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] 93eccffb-bacd-407f-af6f-64451dee7b21 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.130 227364 DEBUG nova.privsep.utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.131 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.317 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.325 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.381 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.383 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.415 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.419 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:45 np0005539551 nova_compute[227360]: 2025-11-29 07:55:45.950 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.015 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.108 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.109 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Ensure instance console log exists: /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.109 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.109 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.110 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.112 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start _get_guest_xml network_info=[{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.116 227364 WARNING nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.121 227364 DEBUG nova.virt.libvirt.host [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.122 227364 DEBUG nova.virt.libvirt.host [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.127 227364 DEBUG nova.virt.libvirt.host [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.127 227364 DEBUG nova.virt.libvirt.host [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.129 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.129 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.130 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.130 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.130 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.130 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.131 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.131 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.131 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.131 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.132 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.132 227364 DEBUG nova.virt.hardware [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.132 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'vcpu_model' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.181 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:46.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3816692794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.606 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.632 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.635 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:46 np0005539551 nova_compute[227360]: 2025-11-29 07:55:46.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:46.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1647718676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.133 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.135 227364 DEBUG nova.virt.libvirt.vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:40Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.135 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.136 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.139 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <uuid>bed5ee49-4f19-4a80-a70a-8972c9a68218</uuid>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <name>instance-00000011</name>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminTestJSON-server-818783229</nova:name>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:55:46</nova:creationTime>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <nova:port uuid="10cc59c9-d730-4ae4-91ea-f799f3de9f32">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="serial">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="uuid">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:8b:38:bc"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <target dev="tap10cc59c9-d7"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log" append="off"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:55:47 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:55:47 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:55:47 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:55:47 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.142 227364 DEBUG nova.virt.libvirt.vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:40Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.143 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.143 227364 DEBUG nova.network.os_vif_util [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.144 227364 DEBUG os_vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.145 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.145 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.146 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.149 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.149 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10cc59c9-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.150 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10cc59c9-d7, col_values=(('external_ids', {'iface-id': '10cc59c9-d730-4ae4-91ea-f799f3de9f32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:38:bc', 'vm-uuid': 'bed5ee49-4f19-4a80-a70a-8972c9a68218'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.151 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:47 np0005539551 NetworkManager[48922]: <info>  [1764402947.1525] manager: (tap10cc59c9-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.156 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.157 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.158 227364 INFO os_vif [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.246 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.247 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.247 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:8b:38:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.248 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Using config drive#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.271 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.298 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'ec2_ids' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:47 np0005539551 nova_compute[227360]: 2025-11-29 07:55:47.413 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'keypairs' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.316 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating config drive at /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config#033[00m
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.321 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxa0_2z6q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.444 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxa0_2z6q" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:48.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.480 227364 DEBUG nova.storage.rbd_utils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.484 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:48 np0005539551 nova_compute[227360]: 2025-11-29 07:55:48.608 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:49 np0005539551 nova_compute[227360]: 2025-11-29 07:55:49.546 227364 DEBUG oslo_concurrency.processutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:49 np0005539551 nova_compute[227360]: 2025-11-29 07:55:49.547 227364 INFO nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting local config drive /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config because it was imported into RBD.#033[00m
Nov 29 02:55:49 np0005539551 kernel: tap10cc59c9-d7: entered promiscuous mode
Nov 29 02:55:49 np0005539551 NetworkManager[48922]: <info>  [1764402949.5876] manager: (tap10cc59c9-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 02:55:49 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:49Z|00099|binding|INFO|Claiming lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 for this chassis.
Nov 29 02:55:49 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:49Z|00100|binding|INFO|10cc59c9-d730-4ae4-91ea-f799f3de9f32: Claiming fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:55:49 np0005539551 nova_compute[227360]: 2025-11-29 07:55:49.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:49 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:49Z|00101|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 ovn-installed in OVS
Nov 29 02:55:49 np0005539551 nova_compute[227360]: 2025-11-29 07:55:49.608 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:49 np0005539551 systemd-udevd[238476]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:49 np0005539551 nova_compute[227360]: 2025-11-29 07:55:49.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:49 np0005539551 systemd-machined[190756]: New machine qemu-11-instance-00000011.
Nov 29 02:55:49 np0005539551 NetworkManager[48922]: <info>  [1764402949.6268] device (tap10cc59c9-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:49 np0005539551 NetworkManager[48922]: <info>  [1764402949.6276] device (tap10cc59c9-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:49 np0005539551 systemd[1]: Started Virtual Machine qemu-11-instance-00000011.
Nov 29 02:55:50 np0005539551 ovn_controller[130266]: 2025-11-29T07:55:50Z|00102|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 up in Southbound
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.229 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.230 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.232 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.246 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4cee791a-91de-4085-b92f-27af844b891e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.276 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[00dc076f-dd0c-4364-8997-af950d5247b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.279 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[44749e91-ee47-4c6b-8f55-1a89fd3cd023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.307 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a74389c5-1adf-47fa-9b32-3c5e3cb502bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.323 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9ce5ae-5f1f-42e8-898e-0cf9f4eafe8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238533, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.333 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402950.3328066, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.333 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.336 227364 DEBUG nova.compute.manager [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.337 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.339 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[de33d254-dbc9-47a2-a808-c8b6aed24074]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238534, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238534, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.340 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance spawned successfully.#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.340 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.340 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.342 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.343 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.343 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.344 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.344 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:55:50.345 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.644 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.644 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.645 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.645 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.646 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.646 227364 DEBUG nova.virt.libvirt.driver [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.650 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.652 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.802 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.802 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402950.333452, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.803 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.925 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:50 np0005539551 nova_compute[227360]: 2025-11-29 07:55:50.928 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.054 227364 DEBUG nova.compute.manager [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.155 227364 DEBUG nova.compute.manager [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.155 227364 DEBUG oslo_concurrency.lockutils [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.156 227364 DEBUG oslo_concurrency.lockutils [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.156 227364 DEBUG oslo_concurrency.lockutils [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.156 227364 DEBUG nova.compute.manager [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.157 227364 WARNING nova.compute.manager [req-770e6bcc-5bbd-484d-9e50-40f31a9233f2 req-338e4b27-2db0-4732-abe3-5ca25f7fcd88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.190 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.393 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.394 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.394 227364 DEBUG nova.objects.instance [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:55:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:51 np0005539551 nova_compute[227360]: 2025-11-29 07:55:51.775 227364 DEBUG oslo_concurrency.lockutils [None req-b0769321-378b-49fd-964d-8aa9281004ad 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:52 np0005539551 nova_compute[227360]: 2025-11-29 07:55:52.151 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.296 227364 DEBUG nova.compute.manager [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.297 227364 DEBUG oslo_concurrency.lockutils [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.297 227364 DEBUG oslo_concurrency.lockutils [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.297 227364 DEBUG oslo_concurrency.lockutils [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.298 227364 DEBUG nova.compute.manager [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.298 227364 WARNING nova.compute.manager [req-b2dd3c1d-d1d0-47e0-99e1-1a77407078ef req-000f71fe-2217-4334-aef5-3f40788c7c35 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:55:53 np0005539551 nova_compute[227360]: 2025-11-29 07:55:53.610 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:54.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:56.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:57 np0005539551 nova_compute[227360]: 2025-11-29 07:55:57.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:58.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:58 np0005539551 nova_compute[227360]: 2025-11-29 07:55:58.611 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:55:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.092 227364 INFO nova.compute.manager [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Rebuilding instance#033[00m
Nov 29 02:56:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:00.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.733 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'trusted_certs' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.759 227364 DEBUG nova.compute.manager [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.892 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_requests' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.912 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.941 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.959 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.990 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:56:00 np0005539551 nova_compute[227360]: 2025-11-29 07:56:00.994 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:56:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:02 np0005539551 nova_compute[227360]: 2025-11-29 07:56:02.158 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:03 np0005539551 nova_compute[227360]: 2025-11-29 07:56:03.613 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:06.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539551 nova_compute[227360]: 2025-11-29 07:56:07.159 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:07 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:07Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:07 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:07Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:08.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:08 np0005539551 nova_compute[227360]: 2025-11-29 07:56:08.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:09 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 02:56:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:10.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:11 np0005539551 nova_compute[227360]: 2025-11-29 07:56:11.043 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:56:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:12 np0005539551 nova_compute[227360]: 2025-11-29 07:56:12.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:12 np0005539551 podman[238789]: 2025-11-29 07:56:12.596482536 +0000 UTC m=+0.048831539 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:56:12 np0005539551 podman[238788]: 2025-11-29 07:56:12.60758846 +0000 UTC m=+0.062111633 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:56:12 np0005539551 podman[238787]: 2025-11-29 07:56:12.63057074 +0000 UTC m=+0.085559176 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:56:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:12.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:13 np0005539551 kernel: tap10cc59c9-d7 (unregistering): left promiscuous mode
Nov 29 02:56:13 np0005539551 NetworkManager[48922]: <info>  [1764402973.2908] device (tap10cc59c9-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.298 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:13Z|00103|binding|INFO|Releasing lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 from this chassis (sb_readonly=0)
Nov 29 02:56:13 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:13Z|00104|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 down in Southbound
Nov 29 02:56:13 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:13Z|00105|binding|INFO|Removing iface tap10cc59c9-d7 ovn-installed in OVS
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.301 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.309 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.310 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.311 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.327 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c448dd-5118-4f84-ad6d-4deda70add68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.355 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6f156f-56e8-45ce-8c44-e36ff6b16328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.358 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[76925a76-cbea-4e0d-b47a-73cd8b8e2850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.381 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2bea35c3-21c1-4aac-a909-62f80aca92c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 02:56:13 np0005539551 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000011.scope: Consumed 15.028s CPU time.
Nov 29 02:56:13 np0005539551 systemd-machined[190756]: Machine qemu-11-instance-00000011 terminated.
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.390 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.390 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.390 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.395 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1825bce6-f2cb-4491-af4a-fd69527f8653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238858, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.412 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[35017143-ab3e-4b68-aebc-2c7f4c6c8e95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238859, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238859, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.414 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:13.421 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.539 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.544 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.666 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.867 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.867 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:13 np0005539551 nova_compute[227360]: 2025-11-29 07:56:13.867 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.056 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.061 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance destroyed successfully.#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.065 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance destroyed successfully.#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.066 227364 DEBUG nova.virt.libvirt.vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:57Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.066 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.067 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.067 227364 DEBUG os_vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.069 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.069 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10cc59c9-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.075 227364 INFO os_vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.096 227364 DEBUG nova.compute.manager [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.096 227364 DEBUG oslo_concurrency.lockutils [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.096 227364 DEBUG oslo_concurrency.lockutils [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.096 227364 DEBUG oslo_concurrency.lockutils [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.097 227364 DEBUG nova.compute.manager [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.097 227364 WARNING nova.compute.manager [req-89af97d1-fe82-4cc2-bc34-f9b31099e95d req-1a674bc9-40b1-46ea-aab0-097f508cb362 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 02:56:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:14.824 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:14.825 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:14 np0005539551 nova_compute[227360]: 2025-11-29 07:56:14.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:14.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.725 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting instance files /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del#033[00m
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.726 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deletion of /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del complete#033[00m
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.934 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.934 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating image(s)#033[00m
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.967 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:15 np0005539551 nova_compute[227360]: 2025-11-29 07:56:15.995 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.022 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.025 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.094 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.095 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.096 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.096 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.123 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.127 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.234 227364 DEBUG nova.compute.manager [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.235 227364 DEBUG oslo_concurrency.lockutils [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.235 227364 DEBUG oslo_concurrency.lockutils [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.235 227364 DEBUG oslo_concurrency.lockutils [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.236 227364 DEBUG nova.compute.manager [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.236 227364 WARNING nova.compute.manager [req-be2f6a99-c79f-4f5d-9f51-f7224ecd0078 req-7d3c8cea-1b28-4242-a70a-b1980b62d2b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 02:56:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.804 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bed5ee49-4f19-4a80-a70a-8972c9a68218_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:56:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:56:16 np0005539551 nova_compute[227360]: 2025-11-29 07:56:16.893 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.059 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.060 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Ensure instance console log exists: /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.061 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.062 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.062 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.065 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start _get_guest_xml network_info=[{"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.072 227364 WARNING nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.079 227364 DEBUG nova.virt.libvirt.host [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.080 227364 DEBUG nova.virt.libvirt.host [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.085 227364 DEBUG nova.virt.libvirt.host [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.086 227364 DEBUG nova.virt.libvirt.host [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.088 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.088 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.089 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.090 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.090 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.090 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.091 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.091 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.092 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.092 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.092 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.093 227364 DEBUG nova.virt.hardware [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.093 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'vcpu_model' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.113 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.442 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updating instance_info_cache with network_info: [{"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.476 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-ec72da52-17dd-401b-8538-90262cfe6006" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.477 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.478 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.478 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.479 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.479 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.482 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.482 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.483 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.483 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.509 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.510 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.510 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.511 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.511 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2815946686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.599 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.627 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:17 np0005539551 nova_compute[227360]: 2025-11-29 07:56:17.632 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3808142989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.030 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1860703800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.111 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.114 227364 DEBUG nova.virt.libvirt.vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJ
SON-93807439-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:15Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.114 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.115 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.118 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <uuid>bed5ee49-4f19-4a80-a70a-8972c9a68218</uuid>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <name>instance-00000011</name>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminTestJSON-server-818783229</nova:name>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:56:17</nova:creationTime>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <nova:port uuid="10cc59c9-d730-4ae4-91ea-f799f3de9f32">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="serial">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="uuid">bed5ee49-4f19-4a80-a70a-8972c9a68218</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:8b:38:bc"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <target dev="tap10cc59c9-d7"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/console.log" append="off"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:56:18 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:56:18 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:56:18 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:56:18 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.118 227364 DEBUG nova.virt.libvirt.vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJ
SON-93807439-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:15Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.119 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.119 227364 DEBUG nova.network.os_vif_util [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.120 227364 DEBUG os_vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.121 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.122 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.125 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.125 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10cc59c9-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.126 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10cc59c9-d7, col_values=(('external_ids', {'iface-id': '10cc59c9-d730-4ae4-91ea-f799f3de9f32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:38:bc', 'vm-uuid': 'bed5ee49-4f19-4a80-a70a-8972c9a68218'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:18 np0005539551 NetworkManager[48922]: <info>  [1764402978.1435] manager: (tap10cc59c9-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.148 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.151 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.152 227364 INFO os_vif [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.203 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.203 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.207 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.207 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.218 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.219 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.219 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:8b:38:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.220 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Using config drive#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.247 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.267 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'ec2_ids' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.293 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'keypairs' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.414 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.416 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4525MB free_disk=20.802997589111328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.416 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.416 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:18.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.526 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance bed5ee49-4f19-4a80-a70a-8972c9a68218 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.527 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance ec72da52-17dd-401b-8538-90262cfe6006 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.527 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 59925490-c9ac-438c-9a1f-4356f493b103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.527 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.527 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.596 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:18 np0005539551 nova_compute[227360]: 2025-11-29 07:56:18.669 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2999325381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.065 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.072 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.114 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.148 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.149 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.327 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Creating config drive at /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.333 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5ryopod execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.476 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5ryopod" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.512 227364 DEBUG nova.storage.rbd_utils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:19 np0005539551 nova_compute[227360]: 2025-11-29 07:56:19.517 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:19.848 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:19.849 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:19.849 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:20.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.639 227364 DEBUG oslo_concurrency.processutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config bed5ee49-4f19-4a80-a70a-8972c9a68218_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.640 227364 INFO nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting local config drive /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:20 np0005539551 kernel: tap10cc59c9-d7: entered promiscuous mode
Nov 29 02:56:20 np0005539551 NetworkManager[48922]: <info>  [1764402980.6923] manager: (tap10cc59c9-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 02:56:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:20Z|00106|binding|INFO|Claiming lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 for this chassis.
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.693 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:20Z|00107|binding|INFO|10cc59c9-d730-4ae4-91ea-f799f3de9f32: Claiming fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.699 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.701 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.703 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:56:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:20Z|00108|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 up in Southbound
Nov 29 02:56:20 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:20Z|00109|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 ovn-installed in OVS
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.711 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.714 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.717 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0aaa264-2316-40ff-a2fe-d0409fd33df7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 systemd-machined[190756]: New machine qemu-12-instance-00000011.
Nov 29 02:56:20 np0005539551 systemd[1]: Started Virtual Machine qemu-12-instance-00000011.
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.750 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b52d18e1-9223-446b-bef9-fcf326399183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.753 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c798c87b-81ad-4dde-a54b-b5f91633ef16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 systemd-udevd[239287]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:20 np0005539551 NetworkManager[48922]: <info>  [1764402980.7678] device (tap10cc59c9-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:56:20 np0005539551 NetworkManager[48922]: <info>  [1764402980.7692] device (tap10cc59c9-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.783 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[993f9910-ddf2-4d1f-a35a-39124108ef67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.801 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3835a690-7951-470d-a5d9-08ce0aed1256]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239295, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.820 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c64ece2-1358-4158-bea2-76cfdd16286e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239298, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239298, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.821 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.823 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539551 nova_compute[227360]: 2025-11-29 07:56:20.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.825 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.825 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.826 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:20.826 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.095 227364 DEBUG nova.compute.manager [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.096 227364 DEBUG oslo_concurrency.lockutils [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.096 227364 DEBUG oslo_concurrency.lockutils [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.097 227364 DEBUG oslo_concurrency.lockutils [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.097 227364 DEBUG nova.compute.manager [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.098 227364 WARNING nova.compute.manager [req-fd63a691-1091-490e-ba2f-94e5174d5acf req-8cf65f43-888c-4b43-9e3b-631559fcde1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.208 227364 DEBUG nova.compute.manager [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.209 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.210 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for bed5ee49-4f19-4a80-a70a-8972c9a68218 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.210 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402981.2082443, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.210 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.215 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance spawned successfully.#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.216 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.235 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.239 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.243 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.243 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.244 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.244 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.244 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.245 227364 DEBUG nova.virt.libvirt.driver [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.273 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.274 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764402981.2097049, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.274 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.307 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.310 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.328 227364 DEBUG nova.compute.manager [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.479 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.513 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.513 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.513 227364 DEBUG nova.objects.instance [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:56:21 np0005539551 nova_compute[227360]: 2025-11-29 07:56:21.590 227364 DEBUG oslo_concurrency.lockutils [None req-bee6ac81-7e22-48e3-8078-1146f237afba 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:22.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:22.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.144 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.324 227364 DEBUG nova.compute.manager [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.324 227364 DEBUG oslo_concurrency.lockutils [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.325 227364 DEBUG oslo_concurrency.lockutils [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.325 227364 DEBUG oslo_concurrency.lockutils [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.325 227364 DEBUG nova.compute.manager [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.325 227364 WARNING nova.compute.manager [req-e7a24a73-0dd4-439c-836d-52012b4f7a6c req-d4a4d8b1-0639-4ce7-99ba-beecde146a39 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:56:23 np0005539551 nova_compute[227360]: 2025-11-29 07:56:23.712 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539551 nova_compute[227360]: 2025-11-29 07:56:24.164 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:24.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:24.827 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:28 np0005539551 nova_compute[227360]: 2025-11-29 07:56:28.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:28 np0005539551 nova_compute[227360]: 2025-11-29 07:56:28.714 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.906 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.906 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.907 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.907 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.908 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.909 227364 INFO nova.compute.manager [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Terminating instance#033[00m
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.910 227364 DEBUG nova.compute.manager [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:30 np0005539551 kernel: tap9b9b45bf-97 (unregistering): left promiscuous mode
Nov 29 02:56:30 np0005539551 NetworkManager[48922]: <info>  [1764402990.9775] device (tap9b9b45bf-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:30 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:30Z|00110|binding|INFO|Releasing lport 9b9b45bf-9767-4795-b37f-3f17fed8dd49 from this chassis (sb_readonly=0)
Nov 29 02:56:30 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:30Z|00111|binding|INFO|Setting lport 9b9b45bf-9767-4795-b37f-3f17fed8dd49 down in Southbound
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.981 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:30Z|00112|binding|INFO|Removing iface tap9b9b45bf-97 ovn-installed in OVS
Nov 29 02:56:30 np0005539551 nova_compute[227360]: 2025-11-29 07:56:30.985 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:30.991 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:40:96 10.100.0.3'], port_security=['fa:16:3e:33:40:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '59925490-c9ac-438c-9a1f-4356f493b103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9b9b45bf-9767-4795-b37f-3f17fed8dd49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:30.991 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9b9b45bf-9767-4795-b37f-3f17fed8dd49 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:30.993 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.000 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.007 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7d8573-76d4-4da2-86fa-9737d8cc1c0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.035 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff5dfaa-14a3-4c8d-9865-2ff1d86099aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.040 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[29c4ca5a-1a7a-405b-bdf9-ef7d53aa652c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 29 02:56:31 np0005539551 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Consumed 17.456s CPU time.
Nov 29 02:56:31 np0005539551 systemd-machined[190756]: Machine qemu-10-instance-00000016 terminated.
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.071 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[98d5d3b5-be05-4105-b192-119c784d3da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.086 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0da2e5cd-f373-41af-a887-407dbf2fbf8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239353, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.104 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[602d6f2d-f048-42ab-b2bd-038956480d9f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239354, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239354, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.105 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.107 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.111 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.111 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.111 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.111 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:31.112 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.145 227364 INFO nova.virt.libvirt.driver [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Instance destroyed successfully.#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.146 227364 DEBUG nova.objects.instance [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid 59925490-c9ac-438c-9a1f-4356f493b103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.179 227364 DEBUG nova.virt.libvirt.vif [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2003089732',display_name='tempest-ServersAdminTestJSON-server-2003089732',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2003089732',id=22,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-oesg0t7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:52Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=59925490-c9ac-438c-9a1f-4356f493b103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.180 227364 DEBUG nova.network.os_vif_util [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "address": "fa:16:3e:33:40:96", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9b45bf-97", "ovs_interfaceid": "9b9b45bf-9767-4795-b37f-3f17fed8dd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.180 227364 DEBUG nova.network.os_vif_util [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.181 227364 DEBUG os_vif [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.182 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.183 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b9b45bf-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.184 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.186 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.188 227364 INFO os_vif [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:40:96,bridge_name='br-int',has_traffic_filtering=True,id=9b9b45bf-9767-4795-b37f-3f17fed8dd49,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9b45bf-97')#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.493 227364 DEBUG nova.compute.manager [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-unplugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.494 227364 DEBUG oslo_concurrency.lockutils [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.494 227364 DEBUG oslo_concurrency.lockutils [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.495 227364 DEBUG oslo_concurrency.lockutils [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.496 227364 DEBUG nova.compute.manager [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] No waiting events found dispatching network-vif-unplugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:31 np0005539551 nova_compute[227360]: 2025-11-29 07:56:31.496 227364 DEBUG nova.compute.manager [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-unplugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.362 227364 INFO nova.virt.libvirt.driver [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Deleting instance files /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103_del#033[00m
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.362 227364 INFO nova.virt.libvirt.driver [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Deletion of /var/lib/nova/instances/59925490-c9ac-438c-9a1f-4356f493b103_del complete#033[00m
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.431 227364 INFO nova.compute.manager [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.432 227364 DEBUG oslo.service.loopingcall [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.432 227364 DEBUG nova.compute.manager [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:32 np0005539551 nova_compute[227360]: 2025-11-29 07:56:32.432 227364 DEBUG nova.network.neutron [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.675 227364 DEBUG nova.compute.manager [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.676 227364 DEBUG oslo_concurrency.lockutils [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "59925490-c9ac-438c-9a1f-4356f493b103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.676 227364 DEBUG oslo_concurrency.lockutils [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.676 227364 DEBUG oslo_concurrency.lockutils [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.676 227364 DEBUG nova.compute.manager [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] No waiting events found dispatching network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.677 227364 WARNING nova.compute.manager [req-da977a91-d8b0-44de-8bca-3c7b1056f763 req-45866ad8-df26-4df4-a2a0-3afabb2aaf0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received unexpected event network-vif-plugged-9b9b45bf-9767-4795-b37f-3f17fed8dd49 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.716 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.957 227364 DEBUG nova.network.neutron [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:33 np0005539551 nova_compute[227360]: 2025-11-29 07:56:33.984 227364 INFO nova.compute.manager [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Took 1.55 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.030 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.032 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.085 227364 DEBUG nova.compute.manager [req-6716434e-107b-431d-9f88-7e9d917ee3bb req-ea54e7bb-3c51-4d2f-bb64-e9a94413d388 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Received event network-vif-deleted-9b9b45bf-9767-4795-b37f-3f17fed8dd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.198 227364 DEBUG oslo_concurrency.processutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/314927900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.711 227364 DEBUG oslo_concurrency.processutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.717 227364 DEBUG nova.compute.provider_tree [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.732 227364 DEBUG nova.scheduler.client.report [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.752 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.776 227364 INFO nova.scheduler.client.report [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Deleted allocations for instance 59925490-c9ac-438c-9a1f-4356f493b103#033[00m
Nov 29 02:56:34 np0005539551 nova_compute[227360]: 2025-11-29 07:56:34.863 227364 DEBUG oslo_concurrency.lockutils [None req-c12720d8-2a08-4b19-869f-7d83cb30c77b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "59925490-c9ac-438c-9a1f-4356f493b103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.411 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.412 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.412 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.413 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.414 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.416 227364 INFO nova.compute.manager [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Terminating instance#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.417 227364 DEBUG nova.compute.manager [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:35 np0005539551 kernel: tapfe9bbbf6-1a (unregistering): left promiscuous mode
Nov 29 02:56:35 np0005539551 NetworkManager[48922]: <info>  [1764402995.7450] device (tapfe9bbbf6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:35 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:35Z|00113|binding|INFO|Releasing lport fe9bbbf6-1a8e-4407-b98e-d689945a1535 from this chassis (sb_readonly=0)
Nov 29 02:56:35 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:35Z|00114|binding|INFO|Setting lport fe9bbbf6-1a8e-4407-b98e-d689945a1535 down in Southbound
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.751 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:35Z|00115|binding|INFO|Removing iface tapfe9bbbf6-1a ovn-installed in OVS
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.759 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9c:fe 10.100.0.9'], port_security=['fa:16:3e:f1:9c:fe 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec72da52-17dd-401b-8538-90262cfe6006', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=fe9bbbf6-1a8e-4407-b98e-d689945a1535) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.760 139482 INFO neutron.agent.ovn.metadata.agent [-] Port fe9bbbf6-1a8e-4407-b98e-d689945a1535 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.761 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.771 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.783 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45c254f2-2bab-426b-8ede-64dff7ff3dff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 02:56:35 np0005539551 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 19.063s CPU time.
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.814 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0ee5f1-a317-4927-b269-14b44e83422c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 systemd-machined[190756]: Machine qemu-9-instance-00000012 terminated.
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.818 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[fb53087e-4da6-4818-8f18-2877be0beb14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.848 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f6c8db-3522-417a-9f10-e06fbc016eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.853 227364 INFO nova.virt.libvirt.driver [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Instance destroyed successfully.#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.854 227364 DEBUG nova.objects.instance [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid ec72da52-17dd-401b-8538-90262cfe6006 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.864 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[62ba1b6e-38e3-4dbb-9a1b-22a77427683e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591266, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239423, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.874 227364 DEBUG nova.virt.libvirt.vif [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1514710041',display_name='tempest-ServersAdminTestJSON-server-1514710041',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1514710041',id=18,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-jat5344a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:36Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=ec72da52-17dd-401b-8538-90262cfe6006,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.875 227364 DEBUG nova.network.os_vif_util [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "address": "fa:16:3e:f1:9c:fe", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe9bbbf6-1a", "ovs_interfaceid": "fe9bbbf6-1a8e-4407-b98e-d689945a1535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.876 227364 DEBUG nova.network.os_vif_util [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.876 227364 DEBUG os_vif [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.878 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.878 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe9bbbf6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.879 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.879 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b2747c8e-d91a-4ee6-b96c-0a5e06a36a07]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591279, 'tstamp': 591279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239424, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62e5f2a3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591281, 'tstamp': 591281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239424, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.882 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.883 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.884 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.885 227364 INFO os_vif [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:9c:fe,bridge_name='br-int',has_traffic_filtering=True,id=fe9bbbf6-1a8e-4407-b98e-d689945a1535,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe9bbbf6-1a')#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:35.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.901 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.987 227364 DEBUG nova.compute.manager [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-unplugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.987 227364 DEBUG oslo_concurrency.lockutils [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.988 227364 DEBUG oslo_concurrency.lockutils [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.988 227364 DEBUG oslo_concurrency.lockutils [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.989 227364 DEBUG nova.compute.manager [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] No waiting events found dispatching network-vif-unplugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:35 np0005539551 nova_compute[227360]: 2025-11-29 07:56:35.989 227364 DEBUG nova.compute.manager [req-7cec3fe5-cadb-4f83-9a79-ced48fd5b72e req-309945bb-dee2-41f4-aabd-4b0800faff55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-unplugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:36 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:36Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:36 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:36Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:36.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:37 np0005539551 nova_compute[227360]: 2025-11-29 07:56:37.941 227364 INFO nova.virt.libvirt.driver [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Deleting instance files /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006_del#033[00m
Nov 29 02:56:37 np0005539551 nova_compute[227360]: 2025-11-29 07:56:37.942 227364 INFO nova.virt.libvirt.driver [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Deletion of /var/lib/nova/instances/ec72da52-17dd-401b-8538-90262cfe6006_del complete#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.492 227364 DEBUG nova.compute.manager [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.493 227364 DEBUG oslo_concurrency.lockutils [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ec72da52-17dd-401b-8538-90262cfe6006-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.494 227364 DEBUG oslo_concurrency.lockutils [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.494 227364 DEBUG oslo_concurrency.lockutils [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.494 227364 DEBUG nova.compute.manager [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] No waiting events found dispatching network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.495 227364 WARNING nova.compute.manager [req-eddf3901-a951-457f-89c7-c53e3fe8d7a1 req-f4845bfc-3b3a-41bb-9bf7-9ee755e9ec08 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received unexpected event network-vif-plugged-fe9bbbf6-1a8e-4407-b98e-d689945a1535 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:56:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:38.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.616 227364 INFO nova.compute.manager [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Took 3.20 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.617 227364 DEBUG oslo.service.loopingcall [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.617 227364 DEBUG nova.compute.manager [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.617 227364 DEBUG nova.network.neutron [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:38 np0005539551 nova_compute[227360]: 2025-11-29 07:56:38.719 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:38.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.655 227364 DEBUG nova.network.neutron [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.677 227364 INFO nova.compute.manager [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.784 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.784 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.874 227364 DEBUG nova.compute.manager [req-00791a2e-5e60-426c-9490-33bcbf5f2053 req-bc2b33bb-650f-4f55-ab01-f3cc35b31e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Received event network-vif-deleted-fe9bbbf6-1a8e-4407-b98e-d689945a1535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:39 np0005539551 nova_compute[227360]: 2025-11-29 07:56:39.888 227364 DEBUG oslo_concurrency.processutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:40.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1073173864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:40 np0005539551 nova_compute[227360]: 2025-11-29 07:56:40.750 227364 DEBUG oslo_concurrency.processutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:40 np0005539551 nova_compute[227360]: 2025-11-29 07:56:40.761 227364 DEBUG nova.compute.provider_tree [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:40 np0005539551 nova_compute[227360]: 2025-11-29 07:56:40.784 227364 DEBUG nova.scheduler.client.report [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:40 np0005539551 nova_compute[227360]: 2025-11-29 07:56:40.872 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:40 np0005539551 nova_compute[227360]: 2025-11-29 07:56:40.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:41 np0005539551 nova_compute[227360]: 2025-11-29 07:56:41.138 227364 INFO nova.scheduler.client.report [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Deleted allocations for instance ec72da52-17dd-401b-8538-90262cfe6006#033[00m
Nov 29 02:56:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:42.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:56:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:42.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:56:43 np0005539551 nova_compute[227360]: 2025-11-29 07:56:43.409 227364 DEBUG oslo_concurrency.lockutils [None req-a2c7eac3-7b64-4efc-b837-87e30d5642d2 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "ec72da52-17dd-401b-8538-90262cfe6006" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:43 np0005539551 podman[239468]: 2025-11-29 07:56:43.636011122 +0000 UTC m=+0.083610171 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:56:43 np0005539551 podman[239467]: 2025-11-29 07:56:43.646260753 +0000 UTC m=+0.093845542 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:56:43 np0005539551 podman[239466]: 2025-11-29 07:56:43.682038503 +0000 UTC m=+0.126157717 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:56:43 np0005539551 nova_compute[227360]: 2025-11-29 07:56:43.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.049 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.050 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.050 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.051 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.051 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.052 227364 INFO nova.compute.manager [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Terminating instance#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.054 227364 DEBUG nova.compute.manager [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:44 np0005539551 kernel: tap10cc59c9-d7 (unregistering): left promiscuous mode
Nov 29 02:56:44 np0005539551 NetworkManager[48922]: <info>  [1764403004.1293] device (tap10cc59c9-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00116|binding|INFO|Releasing lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 from this chassis (sb_readonly=0)
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.145 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00117|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 down in Southbound
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00118|binding|INFO|Removing iface tap10cc59c9-d7 ovn-installed in OVS
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.148 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.157 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.158 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.159 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.161 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c786c7e4-d30e-4186-8721-d596d7c74515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.162 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b namespace which is not needed anymore#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.165 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 02:56:44 np0005539551 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Consumed 14.335s CPU time.
Nov 29 02:56:44 np0005539551 systemd-machined[190756]: Machine qemu-12-instance-00000011 terminated.
Nov 29 02:56:44 np0005539551 kernel: tap10cc59c9-d7: entered promiscuous mode
Nov 29 02:56:44 np0005539551 NetworkManager[48922]: <info>  [1764403004.2726] manager: (tap10cc59c9-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 02:56:44 np0005539551 systemd-udevd[239531]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00119|binding|INFO|Claiming lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 for this chassis.
Nov 29 02:56:44 np0005539551 kernel: tap10cc59c9-d7 (unregistering): left promiscuous mode
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00120|binding|INFO|10cc59c9-d730-4ae4-91ea-f799f3de9f32: Claiming fa:16:3e:8b:38:bc 10.100.0.14
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.280 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:44 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [NOTICE]   (237168) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:44 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [NOTICE]   (237168) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:44 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [WARNING]  (237168) : Exiting Master process...
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00121|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 up in Southbound
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00122|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 ovn-installed in OVS
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00123|binding|INFO|Releasing lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 from this chassis (sb_readonly=1)
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00124|binding|INFO|Removing iface tap10cc59c9-d7 ovn-installed in OVS
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00125|if_status|INFO|Not setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 down as sb is readonly
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.297 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [ALERT]    (237168) : Current worker (237184) exited with code 143 (Terminated)
Nov 29 02:56:44 np0005539551 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[237141]: [WARNING]  (237168) : All workers exited. Exiting... (0)
Nov 29 02:56:44 np0005539551 systemd[1]: libpod-253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a.scope: Deactivated successfully.
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00126|binding|INFO|Releasing lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 from this chassis (sb_readonly=0)
Nov 29 02:56:44 np0005539551 ovn_controller[130266]: 2025-11-29T07:56:44Z|00127|binding|INFO|Setting lport 10cc59c9-d730-4ae4-91ea-f799f3de9f32 down in Southbound
Nov 29 02:56:44 np0005539551 podman[239551]: 2025-11-29 07:56:44.305493812 +0000 UTC m=+0.049783265 container died 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.312 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:38:bc 10.100.0.14'], port_security=['fa:16:3e:8b:38:bc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bed5ee49-4f19-4a80-a70a-8972c9a68218', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=10cc59c9-d730-4ae4-91ea-f799f3de9f32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.317 227364 INFO nova.virt.libvirt.driver [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Instance destroyed successfully.#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.317 227364 DEBUG nova.objects.instance [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid bed5ee49-4f19-4a80-a70a-8972c9a68218 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.330 227364 DEBUG nova.virt.libvirt.vif [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-818783229',display_name='tempest-ServersAdminTestJSON-server-818783229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-818783229',id=17,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:56:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-0aqrtxlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:56:24Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=bed5ee49-4f19-4a80-a70a-8972c9a68218,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.330 227364 DEBUG nova.network.os_vif_util [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "address": "fa:16:3e:8b:38:bc", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10cc59c9-d7", "ovs_interfaceid": "10cc59c9-d730-4ae4-91ea-f799f3de9f32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.331 227364 DEBUG nova.network.os_vif_util [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-3a3b5ee9644dce2a28b3c201fdbdb02606960989570d416a16147f081c89894b-merged.mount: Deactivated successfully.
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.332 227364 DEBUG os_vif [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.336 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.336 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10cc59c9-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.340 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.342 227364 INFO os_vif [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:38:bc,bridge_name='br-int',has_traffic_filtering=True,id=10cc59c9-d730-4ae4-91ea-f799f3de9f32,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10cc59c9-d7')#033[00m
Nov 29 02:56:44 np0005539551 podman[239551]: 2025-11-29 07:56:44.347922515 +0000 UTC m=+0.092211968 container cleanup 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:56:44 np0005539551 systemd[1]: libpod-conmon-253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a.scope: Deactivated successfully.
Nov 29 02:56:44 np0005539551 podman[239597]: 2025-11-29 07:56:44.412831843 +0000 UTC m=+0.043766461 container remove 253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.418 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7d3175-1b2f-46d9-8e38-17ee3936e494]: (4, ('Sat Nov 29 07:56:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b (253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a)\n253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a\nSat Nov 29 07:56:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b (253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a)\n253d77c809441e5f16f71626f312f4781a07b44be8854c04cdc53bac110edc0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.421 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4d402c-9567-494a-9fcb-c69358addf3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.422 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:44 np0005539551 kernel: tap62e5f2a3-c0: left promiscuous mode
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.425 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.428 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf46d19-b685-4431-94cb-4782518ef339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 nova_compute[227360]: 2025-11-29 07:56:44.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.444 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4181cddc-153b-43e6-a78b-1a187bd1531f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.445 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c2331733-3f16-4cb0-b5a5-bbef6adea902]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.457 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[440d23ce-e296-41ac-88dd-a94c326652cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591259, 'reachable_time': 15998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239614, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 systemd[1]: run-netns-ovnmeta\x2d62e5f2a3\x2dcc8a\x2d4952\x2dbbb2\x2de2fde1379e9b.mount: Deactivated successfully.
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.460 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.460 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6a383f47-8265-44f0-bade-fdbdfd8f7b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.461 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.462 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.462 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc5cb5b-46d0-49fb-ba64-27a2aa287459]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.463 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 10cc59c9-d730-4ae4-91ea-f799f3de9f32 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.463 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:44.464 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f9042c-f25e-46f9-a34b-e287dac910d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:44.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.212 227364 DEBUG nova.compute.manager [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.213 227364 DEBUG oslo_concurrency.lockutils [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.213 227364 DEBUG oslo_concurrency.lockutils [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.213 227364 DEBUG oslo_concurrency.lockutils [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.214 227364 DEBUG nova.compute.manager [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.214 227364 DEBUG nova.compute.manager [req-6f1ba6f0-3853-480e-bd79-7cdb8b5bcd9d req-e67dd2a1-4c82-4f19-9328-6f0dca51a291 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.842 227364 INFO nova.virt.libvirt.driver [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deleting instance files /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.843 227364 INFO nova.virt.libvirt.driver [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deletion of /var/lib/nova/instances/bed5ee49-4f19-4a80-a70a-8972c9a68218_del complete#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.898 227364 INFO nova.compute.manager [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Took 1.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.899 227364 DEBUG oslo.service.loopingcall [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.899 227364 DEBUG nova.compute.manager [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:45 np0005539551 nova_compute[227360]: 2025-11-29 07:56:45.899 227364 DEBUG nova.network.neutron [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.143 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402991.1427834, 59925490-c9ac-438c-9a1f-4356f493b103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.144 227364 INFO nova.compute.manager [-] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.166 227364 DEBUG nova.compute.manager [None req-a25d31cb-2fae-4c4a-ad07-9e9e7d6f8539 - - - - - -] [instance: 59925490-c9ac-438c-9a1f-4356f493b103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:46.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:46.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.930 227364 DEBUG nova.network.neutron [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.947 227364 INFO nova.compute.manager [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Took 1.05 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.986 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:46 np0005539551 nova_compute[227360]: 2025-11-29 07:56:46.987 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.027 227364 DEBUG nova.compute.manager [req-2358a7a8-6071-4ae6-bc02-273b52a0b6f2 req-b42e4aaa-ec54-4a26-bf25-7d1b0fec949c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-deleted-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.062 227364 DEBUG oslo_concurrency.processutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.371 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.372 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.373 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.373 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.373 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.373 227364 WARNING nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.373 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.374 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.374 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.374 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.374 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.374 227364 WARNING nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state deleted and task_state None.
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.375 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.375 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.375 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.375 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.375 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.376 227364 WARNING nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state deleted and task_state None.
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.376 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.376 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.376 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.376 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 WARNING nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-unplugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state deleted and task_state None.
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.377 227364 DEBUG oslo_concurrency.lockutils [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.378 227364 DEBUG nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] No waiting events found dispatching network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.378 227364 WARNING nova.compute.manager [req-b1452348-a8dc-421e-86ca-d25b6c06eb15 req-225acafa-b20b-47d0-beb5-2378d6c44920 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Received unexpected event network-vif-plugged-10cc59c9-d730-4ae4-91ea-f799f3de9f32 for instance with vm_state deleted and task_state None.
Nov 29 02:56:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4284408326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.656 227364 DEBUG oslo_concurrency.processutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.663 227364 DEBUG nova.compute.provider_tree [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.680 227364 DEBUG nova.scheduler.client.report [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.698 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.723 227364 INFO nova.scheduler.client.report [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Deleted allocations for instance bed5ee49-4f19-4a80-a70a-8972c9a68218
Nov 29 02:56:47 np0005539551 nova_compute[227360]: 2025-11-29 07:56:47.778 227364 DEBUG oslo_concurrency.lockutils [None req-172820b5-8e57-4258-a3be-ccf872603f61 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "bed5ee49-4f19-4a80-a70a-8972c9a68218" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:48.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:48 np0005539551 nova_compute[227360]: 2025-11-29 07:56:48.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:48.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.338 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.411 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.412 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.519 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.610 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.610 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.616 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.617 227364 INFO nova.compute.claims [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Claim successful on node compute-1.ctlplane.example.com
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.729 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.753 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.754 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.823 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:56:49 np0005539551 nova_compute[227360]: 2025-11-29 07:56:49.979 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620390137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.671 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.680 227364 DEBUG nova.compute.provider_tree [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.719 227364 DEBUG nova.scheduler.client.report [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.779 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.780 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.783 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.790 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.790 227364 INFO nova.compute.claims [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Claim successful on node compute-1.ctlplane.example.com
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.851 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402995.8509307, ec72da52-17dd-401b-8538-90262cfe6006 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.852 227364 INFO nova.compute.manager [-] [instance: ec72da52-17dd-401b-8538-90262cfe6006] VM Stopped (Lifecycle Event)
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.877 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.877 227364 DEBUG nova.network.neutron [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.881 227364 DEBUG nova.compute.manager [None req-993f2ab0-ea03-4b53-b305-affc07e5a590 - - - - - -] [instance: ec72da52-17dd-401b-8538-90262cfe6006] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.895 227364 INFO nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.911 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:56:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:50.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539551 nova_compute[227360]: 2025-11-29 07:56:50.990 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.012 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.014 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.015 227364 INFO nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Creating image(s)
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.049 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.079 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.110 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.116 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.178 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.179 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.180 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.180 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.208 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.212 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.309 227364 DEBUG nova.network.neutron [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.310 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.313 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/800587937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.454 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.459 227364 DEBUG nova.compute.provider_tree [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.475 227364 DEBUG nova.scheduler.client.report [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.505 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.506 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.571 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.571 227364 DEBUG nova.network.neutron [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.590 227364 INFO nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.611 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.694 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.695 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:56:51 np0005539551 nova_compute[227360]: 2025-11-29 07:56:51.696 227364 INFO nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Creating image(s)
Nov 29 02:56:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.367 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.406 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.434 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.437 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.467 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.506 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.507 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.508 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.508 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.530 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.534 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.596 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] resizing rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.703 227364 DEBUG nova.objects.instance [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'migration_context' on Instance uuid 16f4d46e-eac1-444e-92e9-dd3128fa681b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.717 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.718 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Ensure instance console log exists: /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.718 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.718 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.718 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.720 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.725 227364 WARNING nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.730 227364 DEBUG nova.virt.libvirt.host [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.730 227364 DEBUG nova.virt.libvirt.host [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.734 227364 DEBUG nova.virt.libvirt.host [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.734 227364 DEBUG nova.virt.libvirt.host [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.735 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.736 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.736 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.736 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.737 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.737 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.737 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.737 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.738 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.738 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.738 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.738 227364 DEBUG nova.virt.hardware [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.741 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.855 227364 DEBUG nova.network.neutron [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.856 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:56:52 np0005539551 nova_compute[227360]: 2025-11-29 07:56:52.913 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:52.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.023 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] resizing rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3347891207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.224 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.271 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.276 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:53.463 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:56:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:53.464 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.528 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.535 227364 DEBUG nova.objects.instance [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.554 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.554 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Ensure instance console log exists: /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.555 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.555 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.556 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.557 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.562 227364 WARNING nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.567 227364 DEBUG nova.virt.libvirt.host [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.567 227364 DEBUG nova.virt.libvirt.host [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.570 227364 DEBUG nova.virt.libvirt.host [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.570 227364 DEBUG nova.virt.libvirt.host [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.571 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.571 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.572 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.572 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.572 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.572 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.573 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.573 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.573 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.573 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.574 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.574 227364 DEBUG nova.virt.hardware [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.579 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.726 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/616942721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.838 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.841 227364 DEBUG nova.objects.instance [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f4d46e-eac1-444e-92e9-dd3128fa681b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.860 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <uuid>16f4d46e-eac1-444e-92e9-dd3128fa681b</uuid>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <name>instance-00000019</name>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1990231039</nova:name>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:56:52</nova:creationTime>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:user uuid="8d70080854cd432eb4d60eaa335e9658">tempest-LiveMigrationNegativeTest-1463328085-project-member</nova:user>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <nova:project uuid="a20e261a6e5247c5a6a89f85be36c69c">tempest-LiveMigrationNegativeTest-1463328085</nova:project>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="serial">16f4d46e-eac1-444e-92e9-dd3128fa681b</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="uuid">16f4d46e-eac1-444e-92e9-dd3128fa681b</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/16f4d46e-eac1-444e-92e9-dd3128fa681b_disk">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/console.log" append="off"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:56:53 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:56:53 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:56:53 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:56:53 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.933 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.933 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:53 np0005539551 nova_compute[227360]: 2025-11-29 07:56:53.934 227364 INFO nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Using config drive#033[00m
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/162260480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.029 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.037 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.083 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.095 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.279 227364 INFO nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Creating config drive at /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.285 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11dhxfi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.339 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.419 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11dhxfi" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.449 227364 DEBUG nova.storage.rbd_utils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.453 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1356999429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.542 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.544 227364 DEBUG nova.objects.instance [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.563 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <uuid>9eb89c30-3f33-4a7c-ae19-8312a2522b82</uuid>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <name>instance-0000001a</name>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:name>tempest-MigrationsAdminTest-server-71474262</nova:name>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:56:53</nova:creationTime>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="serial">9eb89c30-3f33-4a7c-ae19-8312a2522b82</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="uuid">9eb89c30-3f33-4a7c-ae19-8312a2522b82</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/console.log" append="off"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:56:54 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:56:54 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:56:54 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:56:54 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:54.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.622 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.622 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.623 227364 INFO nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Using config drive#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.650 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.676 227364 DEBUG oslo_concurrency.processutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config 16f4d46e-eac1-444e-92e9-dd3128fa681b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.676 227364 INFO nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Deleting local config drive /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:54 np0005539551 systemd-machined[190756]: New machine qemu-13-instance-00000019.
Nov 29 02:56:54 np0005539551 systemd[1]: Started Virtual Machine qemu-13-instance-00000019.
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.871 227364 INFO nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Creating config drive at /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config#033[00m
Nov 29 02:56:54 np0005539551 nova_compute[227360]: 2025-11-29 07:56:54.876 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu8z3j7ii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.009 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu8z3j7ii" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.045 227364 DEBUG nova.storage.rbd_utils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.050 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.279 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403015.2787287, 16f4d46e-eac1-444e-92e9-dd3128fa681b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.279 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.282 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.283 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.287 227364 INFO nova.virt.libvirt.driver [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance spawned successfully.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.287 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.323 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.326 227364 DEBUG oslo_concurrency.processutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config 9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.327 227364 INFO nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Deleting local config drive /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.329 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.329 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.330 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.330 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.331 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.331 227364 DEBUG nova.virt.libvirt.driver [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.337 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.372 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.372 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403015.282027, 16f4d46e-eac1-444e-92e9-dd3128fa681b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.372 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:55 np0005539551 systemd-machined[190756]: New machine qemu-14-instance-0000001a.
Nov 29 02:56:55 np0005539551 systemd[1]: Started Virtual Machine qemu-14-instance-0000001a.
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.402 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.406 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.410 227364 INFO nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Took 4.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.410 227364 DEBUG nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.424 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.475 227364 INFO nova.compute.manager [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Took 5.89 seconds to build instance.#033[00m
Nov 29 02:56:55 np0005539551 nova_compute[227360]: 2025-11-29 07:56:55.503 227364 DEBUG oslo_concurrency.lockutils [None req-3f87b938-91c3-42cd-b6f5-7b0a13491159 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:56:56.467 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:56.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.672 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403016.6723254, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.673 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.675 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.676 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.679 227364 INFO nova.virt.libvirt.driver [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance spawned successfully.#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.679 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.729 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.730 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.731 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.731 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.732 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.732 227364 DEBUG nova.virt.libvirt.driver [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.737 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.740 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.780 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.780 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403016.673566, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.780 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.805 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.809 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.816 227364 INFO nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Took 5.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.816 227364 DEBUG nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.828 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.880 227364 INFO nova.compute.manager [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Took 6.92 seconds to build instance.#033[00m
Nov 29 02:56:56 np0005539551 nova_compute[227360]: 2025-11-29 07:56:56.895 227364 DEBUG oslo_concurrency.lockutils [None req-bccf9ba8-a34f-4143-b79d-939e1e670ac9 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.112 227364 DEBUG nova.objects.instance [None req-523b21b0-b6e4-4ca0-86f9-31845e7f6c64 9ec8be824a7548a09861fa5edc71554b 5b3a09c5b05a4c6183d2a04e9c1cdb60 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f4d46e-eac1-444e-92e9-dd3128fa681b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.130 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403018.1303716, 16f4d46e-eac1-444e-92e9-dd3128fa681b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.131 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.159 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.167 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.195 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 02:56:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:58.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:58 np0005539551 nova_compute[227360]: 2025-11-29 07:56:58.730 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:59 np0005539551 nova_compute[227360]: 2025-11-29 07:56:59.314 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403004.3133032, bed5ee49-4f19-4a80-a70a-8972c9a68218 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:59 np0005539551 nova_compute[227360]: 2025-11-29 07:56:59.316 227364 INFO nova.compute.manager [-] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:59 np0005539551 nova_compute[227360]: 2025-11-29 07:56:59.339 227364 DEBUG nova.compute.manager [None req-b078a76a-92e3-418e-904d-93b5708512a3 - - - - - -] [instance: bed5ee49-4f19-4a80-a70a-8972c9a68218] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:59 np0005539551 nova_compute[227360]: 2025-11-29 07:56:59.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:56:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539551 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 29 02:57:00 np0005539551 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000019.scope: Consumed 3.416s CPU time.
Nov 29 02:57:00 np0005539551 systemd-machined[190756]: Machine qemu-13-instance-00000019 terminated.
Nov 29 02:57:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:00.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539551 nova_compute[227360]: 2025-11-29 07:57:00.590 227364 DEBUG nova.compute.manager [None req-523b21b0-b6e4-4ca0-86f9-31845e7f6c64 9ec8be824a7548a09861fa5edc71554b 5b3a09c5b05a4c6183d2a04e9c1cdb60 - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:01 np0005539551 nova_compute[227360]: 2025-11-29 07:57:01.844 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:01 np0005539551 nova_compute[227360]: 2025-11-29 07:57:01.845 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquired lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:01 np0005539551 nova_compute[227360]: 2025-11-29 07:57:01.845 227364 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.278 227364 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.713 227364 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.728 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Releasing lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.831 227364 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.831 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Creating file /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/991239ee4c954b089c6bc9ff5ae1a73a.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 02:57:02 np0005539551 nova_compute[227360]: 2025-11-29 07:57:02.832 227364 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/991239ee4c954b089c6bc9ff5ae1a73a.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.225 227364 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/991239ee4c954b089c6bc9ff5ae1a73a.tmp" returned: 1 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.227 227364 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/991239ee4c954b089c6bc9ff5ae1a73a.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.227 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Creating directory /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.228 227364 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.449 227364 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.454 227364 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.479 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.481 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.481 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "16f4d46e-eac1-444e-92e9-dd3128fa681b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.481 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.481 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.483 227364 INFO nova.compute.manager [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Terminating instance#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.484 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "refresh_cache-16f4d46e-eac1-444e-92e9-dd3128fa681b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.484 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquired lock "refresh_cache-16f4d46e-eac1-444e-92e9-dd3128fa681b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.484 227364 DEBUG nova.network.neutron [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.653 227364 DEBUG nova.network.neutron [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.951 227364 DEBUG nova.network.neutron [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.974 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Releasing lock "refresh_cache-16f4d46e-eac1-444e-92e9-dd3128fa681b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.975 227364 DEBUG nova.compute.manager [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.982 227364 INFO nova.virt.libvirt.driver [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance destroyed successfully.#033[00m
Nov 29 02:57:03 np0005539551 nova_compute[227360]: 2025-11-29 07:57:03.983 227364 DEBUG nova.objects.instance [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'resources' on Instance uuid 16f4d46e-eac1-444e-92e9-dd3128fa681b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:05.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:06 np0005539551 nova_compute[227360]: 2025-11-29 07:57:06.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:08 np0005539551 nova_compute[227360]: 2025-11-29 07:57:08.779 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.588 227364 INFO nova.virt.libvirt.driver [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Deleting instance files /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b_del#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.589 227364 INFO nova.virt.libvirt.driver [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Deletion of /var/lib/nova/instances/16f4d46e-eac1-444e-92e9-dd3128fa681b_del complete#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.663 227364 INFO nova.compute.manager [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Took 5.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.664 227364 DEBUG oslo.service.loopingcall [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.664 227364 DEBUG nova.compute.manager [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.664 227364 DEBUG nova.network.neutron [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:57:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:09 np0005539551 nova_compute[227360]: 2025-11-29 07:57:09.937 227364 DEBUG nova.network.neutron [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.024 227364 DEBUG nova.network.neutron [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.037 227364 INFO nova.compute.manager [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Took 0.37 seconds to deallocate network for instance.
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.091 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.092 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.168 227364 DEBUG oslo_concurrency.processutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539551 nova_compute[227360]: 2025-11-29 07:57:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.572 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1865002428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.940 227364 DEBUG oslo_concurrency.processutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.771s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.948 227364 DEBUG nova.compute.provider_tree [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.962 227364 DEBUG nova.scheduler.client.report [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:57:11 np0005539551 nova_compute[227360]: 2025-11-29 07:57:11.980 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:12 np0005539551 nova_compute[227360]: 2025-11-29 07:57:12.008 227364 INFO nova.scheduler.client.report [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Deleted allocations for instance 16f4d46e-eac1-444e-92e9-dd3128fa681b
Nov 29 02:57:12 np0005539551 nova_compute[227360]: 2025-11-29 07:57:12.104 227364 DEBUG oslo_concurrency.lockutils [None req-b308f153-774a-40da-b7d9-77e5a1368851 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "16f4d46e-eac1-444e-92e9-dd3128fa681b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:12 np0005539551 nova_compute[227360]: 2025-11-29 07:57:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:12 np0005539551 nova_compute[227360]: 2025-11-29 07:57:12.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:57:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:12 np0005539551 nova_compute[227360]: 2025-11-29 07:57:12.550 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:57:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:12.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:13 np0005539551 nova_compute[227360]: 2025-11-29 07:57:13.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:13 np0005539551 nova_compute[227360]: 2025-11-29 07:57:13.781 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.433 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:57:14 np0005539551 nova_compute[227360]: 2025-11-29 07:57:14.433 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:14 np0005539551 podman[240432]: 2025-11-29 07:57:14.613115676 +0000 UTC m=+0.060184249 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:57:14 np0005539551 podman[240431]: 2025-11-29 07:57:14.623246744 +0000 UTC m=+0.070513773 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:57:14 np0005539551 podman[240420]: 2025-11-29 07:57:14.642405979 +0000 UTC m=+0.085664518 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 02:57:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4174668111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.144 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.212 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.212 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.347 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.348 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4688MB free_disk=20.897945404052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.348 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.349 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.415 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating resource usage from migration c2903281-4709-4a60-aee1-a45ec2b3f2d4
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration c2903281-4709-4a60-aee1-a45ec2b3f2d4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.470 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.591 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403020.5896442, 16f4d46e-eac1-444e-92e9-dd3128fa681b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.592 227364 INFO nova.compute.manager [-] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] VM Stopped (Lifecycle Event)
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.603 227364 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:57:15 np0005539551 nova_compute[227360]: 2025-11-29 07:57:15.612 227364 DEBUG nova.compute.manager [None req-ac1a511a-d1bc-43ce-b2df-c16effb5fb2c - - - - - -] [instance: 16f4d46e-eac1-444e-92e9-dd3128fa681b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:15.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2533420637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.574 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.582 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.589 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.603 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:57:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.625 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:57:16 np0005539551 nova_compute[227360]: 2025-11-29 07:57:16.626 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:17 np0005539551 nova_compute[227360]: 2025-11-29 07:57:17.625 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:17 np0005539551 nova_compute[227360]: 2025-11-29 07:57:17.626 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:17 np0005539551 nova_compute[227360]: 2025-11-29 07:57:17.626 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:57:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:18 np0005539551 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 02:57:18 np0005539551 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Consumed 14.906s CPU time.
Nov 29 02:57:18 np0005539551 systemd-machined[190756]: Machine qemu-14-instance-0000001a terminated.
Nov 29 02:57:18 np0005539551 nova_compute[227360]: 2025-11-29 07:57:18.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.620 227364 INFO nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance shutdown successfully after 14 seconds.
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.625 227364 INFO nova.virt.libvirt.driver [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance destroyed successfully.
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.629 227364 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.629 227364 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.728 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.728 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:19 np0005539551 nova_compute[227360]: 2025-11-29 07:57:19.728 227364 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:19.849 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:19.850 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:19.850 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:21 np0005539551 nova_compute[227360]: 2025-11-29 07:57:21.577 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:21.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:22.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 02:57:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:23 np0005539551 nova_compute[227360]: 2025-11-29 07:57:23.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:24.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:26 np0005539551 ovn_controller[130266]: 2025-11-29T07:57:26Z|00128|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 29 02:57:26 np0005539551 nova_compute[227360]: 2025-11-29 07:57:26.579 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:26.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.624 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.624 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.624 227364 DEBUG nova.compute.manager [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Going to confirm migration 7 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.953 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.953 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.954 227364 DEBUG nova.network.neutron [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:27 np0005539551 nova_compute[227360]: 2025-11-29 07:57:27.954 227364 DEBUG nova.objects.instance [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'info_cache' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.130 227364 DEBUG nova.network.neutron [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.469 227364 DEBUG nova.network.neutron [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.488 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.489 227364 DEBUG nova.objects.instance [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.611 227364 DEBUG nova.storage.rbd_utils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] removing snapshot(nova-resize) on rbd image(9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:57:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:28.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.829 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.829 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:28 np0005539551 nova_compute[227360]: 2025-11-29 07:57:28.887 227364 DEBUG oslo_concurrency.processutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4178925508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.326 227364 DEBUG oslo_concurrency.processutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.334 227364 DEBUG nova.compute.provider_tree [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.352 227364 DEBUG nova.scheduler.client.report [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.404 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.515 227364 INFO nova.scheduler.client.report [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocation for migration c2903281-4709-4a60-aee1-a45ec2b3f2d4#033[00m
Nov 29 02:57:29 np0005539551 nova_compute[227360]: 2025-11-29 07:57:29.579 227364 DEBUG oslo_concurrency.lockutils [None req-7f919255-9067-4b24-8082-be36f008e6ac 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 1.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:31 np0005539551 nova_compute[227360]: 2025-11-29 07:57:31.581 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:31.700 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:31 np0005539551 nova_compute[227360]: 2025-11-29 07:57:31.701 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:31.702 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:57:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:33 np0005539551 nova_compute[227360]: 2025-11-29 07:57:33.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:33 np0005539551 nova_compute[227360]: 2025-11-29 07:57:33.921 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403038.9203522, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:33 np0005539551 nova_compute[227360]: 2025-11-29 07:57:33.921 227364 INFO nova.compute.manager [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:57:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:33 np0005539551 nova_compute[227360]: 2025-11-29 07:57:33.976 227364 DEBUG nova.compute.manager [None req-36b577f2-75e8-4007-9a0c-09794bdc2aad - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 02:57:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539551 nova_compute[227360]: 2025-11-29 07:57:36.630 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:57:36.704 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:37.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:38 np0005539551 nova_compute[227360]: 2025-11-29 07:57:38.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:39.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 02:57:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:40.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:41 np0005539551 nova_compute[227360]: 2025-11-29 07:57:41.631 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:43 np0005539551 nova_compute[227360]: 2025-11-29 07:57:43.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:43.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:44.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:45 np0005539551 podman[240772]: 2025-11-29 07:57:45.62503662 +0000 UTC m=+0.064164658 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:57:45 np0005539551 podman[240771]: 2025-11-29 07:57:45.633125242 +0000 UTC m=+0.071641954 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:57:45 np0005539551 podman[240770]: 2025-11-29 07:57:45.661209921 +0000 UTC m=+0.114139868 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:57:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:46 np0005539551 nova_compute[227360]: 2025-11-29 07:57:46.634 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:46.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:48.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:48 np0005539551 nova_compute[227360]: 2025-11-29 07:57:48.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:51 np0005539551 nova_compute[227360]: 2025-11-29 07:57:51.636 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:51.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:53 np0005539551 nova_compute[227360]: 2025-11-29 07:57:53.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:54.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:55.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539551 nova_compute[227360]: 2025-11-29 07:57:56.684 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:57.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:57:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539551 nova_compute[227360]: 2025-11-29 07:57:58.986 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:00.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.816 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.816 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.839 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.916 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.916 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.929 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:00 np0005539551 nova_compute[227360]: 2025-11-29 07:58:00.930 227364 INFO nova.compute.claims [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.029 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/706821520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.464 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.472 227364 DEBUG nova.compute.provider_tree [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.487 227364 DEBUG nova.scheduler.client.report [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.541 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.542 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.612 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.613 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.632 227364 INFO nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.654 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.754 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.757 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:58:01 np0005539551 nova_compute[227360]: 2025-11-29 07:58:01.759 227364 INFO nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Creating image(s)#033[00m
Nov 29 02:58:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 02:58:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:02.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.550 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.594 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.634 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.640 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.678 227364 DEBUG nova.policy [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e9bb5c912ca1471c895b844b43ac4831', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd932eef5d5104a79bd9aec0bbc489384', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:58:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:02.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.736 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.737 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.739 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.740 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.791 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:02 np0005539551 nova_compute[227360]: 2025-11-29 07:58:02.798 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e8c981fd-dbe3-407d-8810-88f19dac3073_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.263 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e8c981fd-dbe3-407d-8810-88f19dac3073_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.332 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] resizing rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.447 227364 DEBUG nova.objects.instance [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lazy-loading 'migration_context' on Instance uuid e8c981fd-dbe3-407d-8810-88f19dac3073 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.467 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.468 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Ensure instance console log exists: /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.468 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.468 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.469 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:03 np0005539551 nova_compute[227360]: 2025-11-29 07:58:03.987 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:04.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:04 np0005539551 nova_compute[227360]: 2025-11-29 07:58:04.158 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Successfully created port: d5d49a1e-68e2-4463-a368-969f26b28b92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:58:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:04.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:06.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.247 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Successfully updated port: d5d49a1e-68e2-4463-a368-969f26b28b92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.262 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.262 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.262 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.339 227364 DEBUG nova.compute.manager [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.340 227364 DEBUG nova.compute.manager [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing instance network info cache due to event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.340 227364 DEBUG oslo_concurrency.lockutils [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.678 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:06.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539551 nova_compute[227360]: 2025-11-29 07:58:06.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:07 np0005539551 nova_compute[227360]: 2025-11-29 07:58:07.975 227364 DEBUG nova.network.neutron [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:07 np0005539551 nova_compute[227360]: 2025-11-29 07:58:07.994 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:07 np0005539551 nova_compute[227360]: 2025-11-29 07:58:07.994 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Instance network_info: |[{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:58:07 np0005539551 nova_compute[227360]: 2025-11-29 07:58:07.995 227364 DEBUG oslo_concurrency.lockutils [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:07 np0005539551 nova_compute[227360]: 2025-11-29 07:58:07.995 227364 DEBUG nova.network.neutron [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.000 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Start _get_guest_xml network_info=[{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.006 227364 WARNING nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.012 227364 DEBUG nova.virt.libvirt.host [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.012 227364 DEBUG nova.virt.libvirt.host [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:08.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.021 227364 DEBUG nova.virt.libvirt.host [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.022 227364 DEBUG nova.virt.libvirt.host [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.023 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.023 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.024 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.024 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.025 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.025 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.025 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.025 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.026 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.026 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.026 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.027 227364 DEBUG nova.virt.hardware [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.030 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:08.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:08 np0005539551 nova_compute[227360]: 2025-11-29 07:58:08.989 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2718630772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:09 np0005539551 nova_compute[227360]: 2025-11-29 07:58:09.047 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:09 np0005539551 nova_compute[227360]: 2025-11-29 07:58:09.676 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:09 np0005539551 nova_compute[227360]: 2025-11-29 07:58:09.680 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:10.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.104 227364 DEBUG nova.network.neutron [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updated VIF entry in instance network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.105 227364 DEBUG nova.network.neutron [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.130 227364 DEBUG oslo_concurrency.lockutils [req-d2cc2b9d-9529-48d7-970c-eaed91bb5608 req-b164ead6-6715-4323-a50b-312b276ce9c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3147330804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.168 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.170 227364 DEBUG nova.virt.libvirt.vif [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1367849535',display_name='tempest-AttachInterfacesUnderV243Test-server-1367849535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1367849535',id=29,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5pUKdnbNkO8CiOQr+0kyP/sUgTCIrM4EZ5g1/WE6CYbIb0cTodoOAwZQfJ0sH3oJziwW4Tq+7MYwATIYNq6XxP9WNer5wgI+SZUZz6YVKHmYAIGdfIIvK4+61Oj6tzZg==',key_name='tempest-keypair-211474090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d932eef5d5104a79bd9aec0bbc489384',ramdisk_id='',reservation_id='r-2gngzvun',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1727358953',owner_user_name='tempest-AttachInterfacesUnderV243Test-1727358953-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9bb5c912ca1471c895b844b43ac4831',uuid=e8c981fd-dbe3-407d-8810-88f19dac3073,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.170 227364 DEBUG nova.network.os_vif_util [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converting VIF {"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.172 227364 DEBUG nova.network.os_vif_util [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.173 227364 DEBUG nova.objects.instance [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c981fd-dbe3-407d-8810-88f19dac3073 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.204 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <uuid>e8c981fd-dbe3-407d-8810-88f19dac3073</uuid>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <name>instance-0000001d</name>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1367849535</nova:name>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:58:08</nova:creationTime>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:user uuid="e9bb5c912ca1471c895b844b43ac4831">tempest-AttachInterfacesUnderV243Test-1727358953-project-member</nova:user>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:project uuid="d932eef5d5104a79bd9aec0bbc489384">tempest-AttachInterfacesUnderV243Test-1727358953</nova:project>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <nova:port uuid="d5d49a1e-68e2-4463-a368-969f26b28b92">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="serial">e8c981fd-dbe3-407d-8810-88f19dac3073</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="uuid">e8c981fd-dbe3-407d-8810-88f19dac3073</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e8c981fd-dbe3-407d-8810-88f19dac3073_disk">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:9d:95:b6"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <target dev="tapd5d49a1e-68"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/console.log" append="off"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:58:10 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:58:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:58:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:58:10 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.205 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Preparing to wait for external event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.206 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.206 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.206 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.207 227364 DEBUG nova.virt.libvirt.vif [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1367849535',display_name='tempest-AttachInterfacesUnderV243Test-server-1367849535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1367849535',id=29,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5pUKdnbNkO8CiOQr+0kyP/sUgTCIrM4EZ5g1/WE6CYbIb0cTodoOAwZQfJ0sH3oJziwW4Tq+7MYwATIYNq6XxP9WNer5wgI+SZUZz6YVKHmYAIGdfIIvK4+61Oj6tzZg==',key_name='tempest-keypair-211474090',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d932eef5d5104a79bd9aec0bbc489384',ramdisk_id='',reservation_id='r-2gngzvun',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1727358953',owner_user_name='tempest-AttachInterfacesUnderV243Test-1727358953-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9bb5c912ca1471c895b844b43ac4831',uuid=e8c981fd-dbe3-407d-8810-88f19dac3073,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.207 227364 DEBUG nova.network.os_vif_util [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converting VIF {"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.208 227364 DEBUG nova.network.os_vif_util [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.208 227364 DEBUG os_vif [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.209 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.209 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.210 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.213 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.213 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5d49a1e-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.214 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5d49a1e-68, col_values=(('external_ids', {'iface-id': 'd5d49a1e-68e2-4463-a368-969f26b28b92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:95:b6', 'vm-uuid': 'e8c981fd-dbe3-407d-8810-88f19dac3073'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:10 np0005539551 NetworkManager[48922]: <info>  [1764403090.2176] manager: (tapd5d49a1e-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.225 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.226 227364 INFO os_vif [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68')#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.527 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.528 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.528 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] No VIF found with MAC fa:16:3e:9d:95:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.529 227364 INFO nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Using config drive#033[00m
Nov 29 02:58:10 np0005539551 nova_compute[227360]: 2025-11-29 07:58:10.560 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.070 227364 INFO nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Creating config drive at /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.076 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk30c49g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.207 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk30c49g" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.232 227364 DEBUG nova.storage.rbd_utils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] rbd image e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.236 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.759 227364 DEBUG oslo_concurrency.processutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config e8c981fd-dbe3-407d-8810-88f19dac3073_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.759 227364 INFO nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Deleting local config drive /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073/disk.config because it was imported into RBD.#033[00m
Nov 29 02:58:11 np0005539551 kernel: tapd5d49a1e-68: entered promiscuous mode
Nov 29 02:58:11 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:11Z|00129|binding|INFO|Claiming lport d5d49a1e-68e2-4463-a368-969f26b28b92 for this chassis.
Nov 29 02:58:11 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:11Z|00130|binding|INFO|d5d49a1e-68e2-4463-a368-969f26b28b92: Claiming fa:16:3e:9d:95:b6 10.100.0.9
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.809 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539551 NetworkManager[48922]: <info>  [1764403091.8110] manager: (tapd5d49a1e-68): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.813 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.821 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:95:b6 10.100.0.9'], port_security=['fa:16:3e:9d:95:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e8c981fd-dbe3-407d-8810-88f19dac3073', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b448d860-e284-464a-af0f-12f52096cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd932eef5d5104a79bd9aec0bbc489384', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47c5798d-6387-4692-a31e-de53231d5beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88d5cc16-c83b-4004-a9bd-a19537c12db3, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d5d49a1e-68e2-4463-a368-969f26b28b92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.822 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d5d49a1e-68e2-4463-a368-969f26b28b92 in datapath b448d860-e284-464a-af0f-12f52096cce8 bound to our chassis#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.823 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b448d860-e284-464a-af0f-12f52096cce8#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.833 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[317880d5-155e-4f30-9f60-c146ace69852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.834 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb448d860-e1 in ovnmeta-b448d860-e284-464a-af0f-12f52096cce8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.836 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb448d860-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.836 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee91df8-abbc-4803-a5af-e0ecfdef3e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.836 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2f79f0-3cfa-49a8-aa49-5aa28282697d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 systemd-udevd[241163]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:58:11 np0005539551 systemd-machined[190756]: New machine qemu-15-instance-0000001d.
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.846 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[22ab0100-fb5e-4837-ae26-43298c8dd7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 systemd[1]: Started Virtual Machine qemu-15-instance-0000001d.
Nov 29 02:58:11 np0005539551 NetworkManager[48922]: <info>  [1764403091.8530] device (tapd5d49a1e-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:58:11 np0005539551 NetworkManager[48922]: <info>  [1764403091.8540] device (tapd5d49a1e-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.868 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ef850f94-bd7b-4dd9-b71a-fc0657ca2e52]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:11Z|00131|binding|INFO|Setting lport d5d49a1e-68e2-4463-a368-969f26b28b92 ovn-installed in OVS
Nov 29 02:58:11 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:11Z|00132|binding|INFO|Setting lport d5d49a1e-68e2-4463-a368-969f26b28b92 up in Southbound
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.899 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3894cd7c-2e61-4cd3-9f48-de826d00c487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 nova_compute[227360]: 2025-11-29 07:58:11.902 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.907 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b0532f3c-6e46-49b4-b6a6-684831528987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 NetworkManager[48922]: <info>  [1764403091.9084] manager: (tapb448d860-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.938 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[486cb2f7-512d-4227-bd4e-025a03ecc754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.941 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[625fd1e8-7e33-44fc-bd16-c67ce68b53a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 NetworkManager[48922]: <info>  [1764403091.9630] device (tapb448d860-e0): carrier: link connected
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.966 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[540aee07-0444-4390-9f5e-f8f71c09c973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.983 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18da49b3-e631-4757-80fd-0b70a679b926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb448d860-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:c1:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613884, 'reachable_time': 42622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241195, 'error': None, 'target': 'ovnmeta-b448d860-e284-464a-af0f-12f52096cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:11.996 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f88c7297-c9a9-45c0-b9f4-92801101e9e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:c1a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613884, 'tstamp': 613884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241196, 'error': None, 'target': 'ovnmeta-b448d860-e284-464a-af0f-12f52096cce8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.014 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[65dd008a-00e7-4706-9fb2-be5b15cc7cad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb448d860-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:c1:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613884, 'reachable_time': 42622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241197, 'error': None, 'target': 'ovnmeta-b448d860-e284-464a-af0f-12f52096cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.046 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[63e1f8fe-b718-4c2f-8b70-b29b750ba81c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.098 227364 DEBUG nova.compute.manager [req-11ed73c1-5f73-4cbc-8e4c-3be63cf5a993 req-03ea3906-eb20-4129-a6e6-bd05377e8ad8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.099 227364 DEBUG oslo_concurrency.lockutils [req-11ed73c1-5f73-4cbc-8e4c-3be63cf5a993 req-03ea3906-eb20-4129-a6e6-bd05377e8ad8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.099 227364 DEBUG oslo_concurrency.lockutils [req-11ed73c1-5f73-4cbc-8e4c-3be63cf5a993 req-03ea3906-eb20-4129-a6e6-bd05377e8ad8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.099 227364 DEBUG oslo_concurrency.lockutils [req-11ed73c1-5f73-4cbc-8e4c-3be63cf5a993 req-03ea3906-eb20-4129-a6e6-bd05377e8ad8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.100 227364 DEBUG nova.compute.manager [req-11ed73c1-5f73-4cbc-8e4c-3be63cf5a993 req-03ea3906-eb20-4129-a6e6-bd05377e8ad8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Processing event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.108 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c063b9a7-a3d9-4470-b948-029f2644c71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.109 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb448d860-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.109 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.110 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb448d860-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:12 np0005539551 kernel: tapb448d860-e0: entered promiscuous mode
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539551 NetworkManager[48922]: <info>  [1764403092.1632] manager: (tapb448d860-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.165 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb448d860-e0, col_values=(('external_ids', {'iface-id': 'd8dd19b5-0cd0-4d3f-a2b8-b01d3dc7756a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.164 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.166 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:12Z|00133|binding|INFO|Releasing lport d8dd19b5-0cd0-4d3f-a2b8-b01d3dc7756a from this chassis (sb_readonly=0)
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.181 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b448d860-e284-464a-af0f-12f52096cce8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b448d860-e284-464a-af0f-12f52096cce8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.182 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec4166d-0367-483e-94c3-ee9800efac29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.182 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-b448d860-e284-464a-af0f-12f52096cce8
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/b448d860-e284-464a-af0f-12f52096cce8.pid.haproxy
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID b448d860-e284-464a-af0f-12f52096cce8
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:58:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:12.183 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b448d860-e284-464a-af0f-12f52096cce8', 'env', 'PROCESS_TAG=haproxy-b448d860-e284-464a-af0f-12f52096cce8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b448d860-e284-464a-af0f-12f52096cce8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.337 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403092.3370206, e8c981fd-dbe3-407d-8810-88f19dac3073 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.337 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] VM Started (Lifecycle Event)#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.339 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.343 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.346 227364 INFO nova.virt.libvirt.driver [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Instance spawned successfully.#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.347 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.514 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.515 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.523 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.524 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.526 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.526 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.527 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.528 227364 DEBUG nova.virt.libvirt.driver [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.557 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.561 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.615 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.618 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403092.337176, e8c981fd-dbe3-407d-8810-88f19dac3073 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.619 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] VM Paused (Lifecycle Event)
Nov 29 02:58:12 np0005539551 podman[241271]: 2025-11-29 07:58:12.536992885 +0000 UTC m=+0.034818104 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.646 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.650 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403092.3423903, e8c981fd-dbe3-407d-8810-88f19dac3073 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.650 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] VM Resumed (Lifecycle Event)
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.658 227364 INFO nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Took 10.90 seconds to spawn the instance on the hypervisor.
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.658 227364 DEBUG nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.667 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.670 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.721 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.735 227364 INFO nova.compute.manager [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Took 11.85 seconds to build instance.
Nov 29 02:58:12 np0005539551 nova_compute[227360]: 2025-11-29 07:58:12.760 227364 DEBUG oslo_concurrency.lockutils [None req-43834c44-94ad-473a-8214-961f1a3eb059 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:12 np0005539551 podman[241271]: 2025-11-29 07:58:12.969089393 +0000 UTC m=+0.466914562 container create cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:58:13 np0005539551 systemd[1]: Started libpod-conmon-cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149.scope.
Nov 29 02:58:13 np0005539551 systemd[1]: Started libcrun container.
Nov 29 02:58:13 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe1260e275eb12434d5089610c6d9f48adb0be3d451464a97430fc3622469ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:13 np0005539551 podman[241271]: 2025-11-29 07:58:13.139258604 +0000 UTC m=+0.637083803 container init cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:58:13 np0005539551 podman[241271]: 2025-11-29 07:58:13.147063348 +0000 UTC m=+0.644888517 container start cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:58:13 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [NOTICE]   (241290) : New worker (241292) forked
Nov 29 02:58:13 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [NOTICE]   (241290) : Loading success.
Nov 29 02:58:13 np0005539551 nova_compute[227360]: 2025-11-29 07:58:13.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:58:13 np0005539551 nova_compute[227360]: 2025-11-29 07:58:13.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:58:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/661960926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:13 np0005539551 nova_compute[227360]: 2025-11-29 07:58:13.992 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.437 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:14 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/575129154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.954 227364 DEBUG nova.compute.manager [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.956 227364 DEBUG oslo_concurrency.lockutils [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.956 227364 DEBUG oslo_concurrency.lockutils [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.957 227364 DEBUG oslo_concurrency.lockutils [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.957 227364 DEBUG nova.compute.manager [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] No waiting events found dispatching network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.957 227364 WARNING nova.compute.manager [req-29870ef0-6649-44e3-9647-062ebb832eae req-56b67798-c491-4d18-84c4-3076a794ba50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received unexpected event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 for instance with vm_state active and task_state None.
Nov 29 02:58:14 np0005539551 nova_compute[227360]: 2025-11-29 07:58:14.960 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.042 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.043 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:58:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.195 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.196 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4699MB free_disk=20.80999755859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.197 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.197 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.275 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e8c981fd-dbe3-407d-8810-88f19dac3073 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.275 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.276 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.336 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2124113861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.786 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.793 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.814 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.844 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:58:15 np0005539551 nova_compute[227360]: 2025-11-29 07:58:15.845 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:16.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:16 np0005539551 podman[241348]: 2025-11-29 07:58:16.606226119 +0000 UTC m=+0.059344896 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:58:16 np0005539551 podman[241347]: 2025-11-29 07:58:16.634198656 +0000 UTC m=+0.079559942 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:58:16 np0005539551 podman[241346]: 2025-11-29 07:58:16.658055029 +0000 UTC m=+0.104722600 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:58:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:18.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:18 np0005539551 nova_compute[227360]: 2025-11-29 07:58:18.845 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:18 np0005539551 nova_compute[227360]: 2025-11-29 07:58:18.846 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:18 np0005539551 nova_compute[227360]: 2025-11-29 07:58:18.846 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:58:19 np0005539551 nova_compute[227360]: 2025-11-29 07:58:19.038 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539551 NetworkManager[48922]: <info>  [1764403099.7174] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 02:58:19 np0005539551 NetworkManager[48922]: <info>  [1764403099.7185] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 02:58:19 np0005539551 nova_compute[227360]: 2025-11-29 07:58:19.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539551 nova_compute[227360]: 2025-11-29 07:58:19.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:19Z|00134|binding|INFO|Releasing lport d8dd19b5-0cd0-4d3f-a2b8-b01d3dc7756a from this chassis (sb_readonly=0)
Nov 29 02:58:19 np0005539551 nova_compute[227360]: 2025-11-29 07:58:19.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:19.850 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:19.850 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:19.851 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:20.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.220 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.884 227364 DEBUG nova.compute.manager [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.884 227364 DEBUG nova.compute.manager [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing instance network info cache due to event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.885 227364 DEBUG oslo_concurrency.lockutils [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.885 227364 DEBUG oslo_concurrency.lockutils [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:20 np0005539551 nova_compute[227360]: 2025-11-29 07:58:20.886 227364 DEBUG nova.network.neutron [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:24.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:24 np0005539551 nova_compute[227360]: 2025-11-29 07:58:24.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:24 np0005539551 nova_compute[227360]: 2025-11-29 07:58:24.888 227364 DEBUG nova.network.neutron [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updated VIF entry in instance network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:24 np0005539551 nova_compute[227360]: 2025-11-29 07:58:24.888 227364 DEBUG nova.network.neutron [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:24 np0005539551 nova_compute[227360]: 2025-11-29 07:58:24.919 227364 DEBUG oslo_concurrency.lockutils [req-75627309-83b7-4706-9858-b656d82d3822 req-83dd4705-74a2-4cec-aedc-9d9013d24fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:25 np0005539551 nova_compute[227360]: 2025-11-29 07:58:25.223 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:27 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:27Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:95:b6 10.100.0.9
Nov 29 02:58:27 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:27Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:95:b6 10.100.0.9
Nov 29 02:58:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:28.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:29 np0005539551 nova_compute[227360]: 2025-11-29 07:58:29.042 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.341 227364 DEBUG nova.compute.manager [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.458 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.459 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.615 227364 DEBUG nova.objects.instance [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_requests' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.629 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.630 227364 INFO nova.compute.claims [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.630 227364 DEBUG nova.objects.instance [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.642 227364 DEBUG nova.objects.instance [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.705 227364 INFO nova.compute.resource_tracker [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating resource usage from migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3#033[00m
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.705 227364 DEBUG nova.compute.resource_tracker [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting to track incoming migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:58:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539551 nova_compute[227360]: 2025-11-29 07:58:30.789 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:32.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:32.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539551 nova_compute[227360]: 2025-11-29 07:58:34.045 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:34.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:34.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:35 np0005539551 nova_compute[227360]: 2025-11-29 07:58:35.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:36.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2436395977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:37 np0005539551 nova_compute[227360]: 2025-11-29 07:58:37.913 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:37 np0005539551 nova_compute[227360]: 2025-11-29 07:58:37.920 227364 DEBUG nova.compute.provider_tree [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:37 np0005539551 nova_compute[227360]: 2025-11-29 07:58:37.944 227364 DEBUG nova.scheduler.client.report [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:37 np0005539551 nova_compute[227360]: 2025-11-29 07:58:37.966 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 7.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:37 np0005539551 nova_compute[227360]: 2025-11-29 07:58:37.966 227364 INFO nova.compute.manager [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Migrating#033[00m
Nov 29 02:58:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:58:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:38.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836503716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:58:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836503716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:58:39 np0005539551 nova_compute[227360]: 2025-11-29 07:58:39.047 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539551 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:58:39 np0005539551 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:58:39 np0005539551 systemd-logind[788]: New session 51 of user nova.
Nov 29 02:58:39 np0005539551 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:58:39 np0005539551 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:58:39 np0005539551 systemd[241565]: Queued start job for default target Main User Target.
Nov 29 02:58:39 np0005539551 systemd[241565]: Created slice User Application Slice.
Nov 29 02:58:39 np0005539551 systemd[241565]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:58:39 np0005539551 systemd[241565]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:58:39 np0005539551 systemd[241565]: Reached target Paths.
Nov 29 02:58:39 np0005539551 systemd[241565]: Reached target Timers.
Nov 29 02:58:39 np0005539551 systemd[241565]: Starting D-Bus User Message Bus Socket...
Nov 29 02:58:39 np0005539551 systemd[241565]: Starting Create User's Volatile Files and Directories...
Nov 29 02:58:39 np0005539551 systemd[241565]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:58:39 np0005539551 systemd[241565]: Finished Create User's Volatile Files and Directories.
Nov 29 02:58:39 np0005539551 systemd[241565]: Reached target Sockets.
Nov 29 02:58:39 np0005539551 systemd[241565]: Reached target Basic System.
Nov 29 02:58:39 np0005539551 systemd[241565]: Reached target Main User Target.
Nov 29 02:58:39 np0005539551 systemd[241565]: Startup finished in 135ms.
Nov 29 02:58:39 np0005539551 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:58:39 np0005539551 systemd[1]: Started Session 51 of User nova.
Nov 29 02:58:39 np0005539551 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539551 systemd-logind[788]: Session 51 logged out. Waiting for processes to exit.
Nov 29 02:58:39 np0005539551 systemd-logind[788]: Removed session 51.
Nov 29 02:58:39 np0005539551 systemd-logind[788]: New session 53 of user nova.
Nov 29 02:58:39 np0005539551 systemd[1]: Started Session 53 of User nova.
Nov 29 02:58:39 np0005539551 nova_compute[227360]: 2025-11-29 07:58:39.639 227364 DEBUG nova.objects.instance [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lazy-loading 'flavor' on Instance uuid e8c981fd-dbe3-407d-8810-88f19dac3073 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:39 np0005539551 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539551 systemd-logind[788]: Session 53 logged out. Waiting for processes to exit.
Nov 29 02:58:39 np0005539551 systemd-logind[788]: Removed session 53.
Nov 29 02:58:39 np0005539551 nova_compute[227360]: 2025-11-29 07:58:39.673 227364 DEBUG oslo_concurrency.lockutils [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:39 np0005539551 nova_compute[227360]: 2025-11-29 07:58:39.674 227364 DEBUG oslo_concurrency.lockutils [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539551 nova_compute[227360]: 2025-11-29 07:58:40.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:42.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:43.508 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:43.510 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:43 np0005539551 nova_compute[227360]: 2025-11-29 07:58:43.509 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539551 nova_compute[227360]: 2025-11-29 07:58:43.700 227364 DEBUG nova.network.neutron [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:43 np0005539551 nova_compute[227360]: 2025-11-29 07:58:43.860 227364 DEBUG nova.compute.manager [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:43 np0005539551 nova_compute[227360]: 2025-11-29 07:58:43.861 227364 DEBUG nova.compute.manager [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing instance network info cache due to event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:43 np0005539551 nova_compute[227360]: 2025-11-29 07:58:43.861 227364 DEBUG oslo_concurrency.lockutils [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:44.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539551 nova_compute[227360]: 2025-11-29 07:58:44.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:44.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:45 np0005539551 nova_compute[227360]: 2025-11-29 07:58:45.235 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:46.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539551 nova_compute[227360]: 2025-11-29 07:58:46.168 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:46 np0005539551 nova_compute[227360]: 2025-11-29 07:58:46.169 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:46 np0005539551 nova_compute[227360]: 2025-11-29 07:58:46.169 227364 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:46.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539551 nova_compute[227360]: 2025-11-29 07:58:46.991 227364 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.269 227364 DEBUG nova.network.neutron [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:47.512 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.567 227364 DEBUG oslo_concurrency.lockutils [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.568 227364 DEBUG nova.compute.manager [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.568 227364 DEBUG nova.compute.manager [None req-e3c03382-8b46-47cc-ba04-43614563f6d0 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] network_info to inject: |[{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.570 227364 DEBUG oslo_concurrency.lockutils [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.571 227364 DEBUG nova.network.neutron [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:47 np0005539551 nova_compute[227360]: 2025-11-29 07:58:47.945 227364 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539551 nova_compute[227360]: 2025-11-29 07:58:48.216 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:48 np0005539551 nova_compute[227360]: 2025-11-29 07:58:48.689 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 02:58:48 np0005539551 nova_compute[227360]: 2025-11-29 07:58:48.691 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 02:58:48 np0005539551 nova_compute[227360]: 2025-11-29 07:58:48.692 227364 INFO nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Creating image(s)#033[00m
Nov 29 02:58:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:48.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539551 podman[241640]: 2025-11-29 07:58:48.967603226 +0000 UTC m=+1.413978356 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:58:48 np0005539551 podman[241639]: 2025-11-29 07:58:48.97176197 +0000 UTC m=+1.421058780 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:58:48 np0005539551 podman[241638]: 2025-11-29 07:58:48.995033268 +0000 UTC m=+1.446691673 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 02:58:49 np0005539551 nova_compute[227360]: 2025-11-29 07:58:49.059 227364 DEBUG nova.storage.rbd_utils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] creating snapshot(nova-resize) on rbd image(6e814e3b-3edb-4f37-8701-c37929994645_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:49 np0005539551 nova_compute[227360]: 2025-11-29 07:58:49.475 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:49 np0005539551 nova_compute[227360]: 2025-11-29 07:58:49.481 227364 DEBUG nova.objects.instance [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lazy-loading 'flavor' on Instance uuid e8c981fd-dbe3-407d-8810-88f19dac3073 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:49 np0005539551 nova_compute[227360]: 2025-11-29 07:58:49.615 227364 DEBUG oslo_concurrency.lockutils [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:49 np0005539551 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 02:58:49 np0005539551 systemd[241565]: Activating special unit Exit the Session...
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped target Main User Target.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped target Basic System.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped target Paths.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped target Sockets.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped target Timers.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:58:49 np0005539551 systemd[241565]: Closed D-Bus User Message Bus Socket.
Nov 29 02:58:49 np0005539551 systemd[241565]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:58:49 np0005539551 systemd[241565]: Removed slice User Application Slice.
Nov 29 02:58:49 np0005539551 systemd[241565]: Reached target Shutdown.
Nov 29 02:58:49 np0005539551 systemd[241565]: Finished Exit the Session.
Nov 29 02:58:49 np0005539551 systemd[241565]: Reached target Exit the Session.
Nov 29 02:58:49 np0005539551 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 02:58:49 np0005539551 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 02:58:49 np0005539551 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 02:58:49 np0005539551 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 02:58:49 np0005539551 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 02:58:49 np0005539551 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 02:58:49 np0005539551 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 02:58:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:50.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:50 np0005539551 nova_compute[227360]: 2025-11-29 07:58:50.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:50.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:51 np0005539551 nova_compute[227360]: 2025-11-29 07:58:51.006 227364 DEBUG nova.network.neutron [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updated VIF entry in instance network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:51 np0005539551 nova_compute[227360]: 2025-11-29 07:58:51.007 227364 DEBUG nova.network.neutron [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:51 np0005539551 nova_compute[227360]: 2025-11-29 07:58:51.100 227364 DEBUG oslo_concurrency.lockutils [req-29a79b33-62f5-426b-af19-ab3ea55f5e09 req-d011ec7b-4ca8-470a-9365-c6e7a125b050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:51 np0005539551 nova_compute[227360]: 2025-11-29 07:58:51.101 227364 DEBUG oslo_concurrency.lockutils [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:52.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 02:58:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:53 np0005539551 nova_compute[227360]: 2025-11-29 07:58:53.978 227364 DEBUG nova.objects.instance [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.001 227364 DEBUG nova.network.neutron [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.172 227364 DEBUG nova.compute.manager [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.173 227364 DEBUG nova.compute.manager [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing instance network info cache due to event network-changed-d5d49a1e-68e2-4463-a368-969f26b28b92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.173 227364 DEBUG oslo_concurrency.lockutils [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.290 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.291 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Ensure instance console log exists: /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.291 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.291 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.292 227364 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.293 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.297 227364 WARNING nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.312 227364 DEBUG nova.virt.libvirt.host [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.314 227364 DEBUG nova.virt.libvirt.host [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.320 227364 DEBUG nova.virt.libvirt.host [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.320 227364 DEBUG nova.virt.libvirt.host [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.322 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.322 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.322 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.323 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.323 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.323 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.323 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.323 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.324 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.324 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.324 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.324 227364 DEBUG nova.virt.hardware [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.324 227364 DEBUG nova.objects.instance [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:54 np0005539551 nova_compute[227360]: 2025-11-29 07:58:54.519 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2651516466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:55 np0005539551 nova_compute[227360]: 2025-11-29 07:58:55.077 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:55 np0005539551 nova_compute[227360]: 2025-11-29 07:58:55.132 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:55 np0005539551 nova_compute[227360]: 2025-11-29 07:58:55.255 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:58:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:56.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:58:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:57 np0005539551 nova_compute[227360]: 2025-11-29 07:58:57.967 227364 DEBUG nova.network.neutron [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:58 np0005539551 nova_compute[227360]: 2025-11-29 07:58:58.028 227364 DEBUG oslo_concurrency.lockutils [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:58 np0005539551 nova_compute[227360]: 2025-11-29 07:58:58.029 227364 DEBUG nova.compute.manager [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 02:58:58 np0005539551 nova_compute[227360]: 2025-11-29 07:58:58.030 227364 DEBUG nova.compute.manager [None req-faa62f87-75a0-4024-a011-ba6d56e3b028 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] network_info to inject: |[{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 02:58:58 np0005539551 nova_compute[227360]: 2025-11-29 07:58:58.034 227364 DEBUG oslo_concurrency.lockutils [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:58 np0005539551 nova_compute[227360]: 2025-11-29 07:58:58.035 227364 DEBUG nova.network.neutron [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Refreshing network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:58.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 02:58:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:58:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.094 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.095 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.095 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.095 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.096 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.097 227364 INFO nova.compute.manager [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Terminating instance#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.099 227364 DEBUG nova.compute.manager [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:58:59 np0005539551 kernel: tapd5d49a1e-68 (unregistering): left promiscuous mode
Nov 29 02:58:59 np0005539551 NetworkManager[48922]: <info>  [1764403139.1707] device (tapd5d49a1e-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:58:59 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:59Z|00135|binding|INFO|Releasing lport d5d49a1e-68e2-4463-a368-969f26b28b92 from this chassis (sb_readonly=0)
Nov 29 02:58:59 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:59Z|00136|binding|INFO|Setting lport d5d49a1e-68e2-4463-a368-969f26b28b92 down in Southbound
Nov 29 02:58:59 np0005539551 ovn_controller[130266]: 2025-11-29T07:58:59Z|00137|binding|INFO|Removing iface tapd5d49a1e-68 ovn-installed in OVS
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.182 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.187 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:59.189 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:95:b6 10.100.0.9'], port_security=['fa:16:3e:9d:95:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e8c981fd-dbe3-407d-8810-88f19dac3073', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b448d860-e284-464a-af0f-12f52096cce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd932eef5d5104a79bd9aec0bbc489384', 'neutron:revision_number': '6', 'neutron:security_group_ids': '47c5798d-6387-4692-a31e-de53231d5beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88d5cc16-c83b-4004-a9bd-a19537c12db3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d5d49a1e-68e2-4463-a368-969f26b28b92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:59.190 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d5d49a1e-68e2-4463-a368-969f26b28b92 in datapath b448d860-e284-464a-af0f-12f52096cce8 unbound from our chassis
Nov 29 02:58:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:59.192 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b448d860-e284-464a-af0f-12f52096cce8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 02:58:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:59.193 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb47e2f-51ae-42e5-b2ce-d1f258e4fa17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:58:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:58:59.194 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b448d860-e284-464a-af0f-12f52096cce8 namespace which is not needed anymore
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.204 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:59 np0005539551 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 02:58:59 np0005539551 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Consumed 15.193s CPU time.
Nov 29 02:58:59 np0005539551 systemd-machined[190756]: Machine qemu-15-instance-0000001d terminated.
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.332 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.361 227364 INFO nova.virt.libvirt.driver [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Instance destroyed successfully.
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.361 227364 DEBUG nova.objects.instance [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lazy-loading 'resources' on Instance uuid e8c981fd-dbe3-407d-8810-88f19dac3073 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.964 227364 DEBUG nova.network.neutron [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updated VIF entry in instance network info cache for port d5d49a1e-68e2-4463-a368-969f26b28b92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:58:59 np0005539551 nova_compute[227360]: 2025-11-29 07:58:59.964 227364 DEBUG nova.network.neutron [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [{"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:59 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 29 02:58:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:58:59.984611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:58:59 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 29 02:58:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403139984738, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2608, "num_deletes": 510, "total_data_size": 5682224, "memory_usage": 5773184, "flush_reason": "Manual Compaction"}
Nov 29 02:58:59 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 29 02:59:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:00.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.258 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403140535135, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3350767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26618, "largest_seqno": 29221, "table_properties": {"data_size": 3340965, "index_size": 5592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24738, "raw_average_key_size": 20, "raw_value_size": 3318885, "raw_average_value_size": 2693, "num_data_blocks": 243, "num_entries": 1232, "num_filter_entries": 1232, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402934, "oldest_key_time": 1764402934, "file_creation_time": 1764403139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 551042 microseconds, and 14494 cpu microseconds.
Nov 29 02:59:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.581 227364 DEBUG nova.compute.manager [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-unplugged-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.582 227364 DEBUG oslo_concurrency.lockutils [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.583 227364 DEBUG oslo_concurrency.lockutils [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.583 227364 DEBUG oslo_concurrency.lockutils [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.583 227364 DEBUG nova.compute.manager [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] No waiting events found dispatching network-vif-unplugged-d5d49a1e-68e2-4463-a368-969f26b28b92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.584 227364 DEBUG nova.compute.manager [req-b69ddb82-0e4b-4c55-8841-d22b744e35e2 req-4618fcf7-1b8a-4858-909f-97bc1aa87d99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-unplugged-d5d49a1e-68e2-4463-a368-969f26b28b92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.592 227364 DEBUG nova.virt.libvirt.vif [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1367849535',display_name='tempest-AttachInterfacesUnderV243Test-server-1367849535',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1367849535',id=29,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG5pUKdnbNkO8CiOQr+0kyP/sUgTCIrM4EZ5g1/WE6CYbIb0cTodoOAwZQfJ0sH3oJziwW4Tq+7MYwATIYNq6XxP9WNer5wgI+SZUZz6YVKHmYAIGdfIIvK4+61Oj6tzZg==',key_name='tempest-keypair-211474090',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:58:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d932eef5d5104a79bd9aec0bbc489384',ramdisk_id='',reservation_id='r-2gngzvun',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1727358953',owner_user_name='tempest-AttachInterfacesUnderV243Test-1727358953-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:58:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9bb5c912ca1471c895b844b43ac4831',uuid=e8c981fd-dbe3-407d-8810-88f19dac3073,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.592 227364 DEBUG nova.network.os_vif_util [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converting VIF {"id": "d5d49a1e-68e2-4463-a368-969f26b28b92", "address": "fa:16:3e:9d:95:b6", "network": {"id": "b448d860-e284-464a-af0f-12f52096cce8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-414419120-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d932eef5d5104a79bd9aec0bbc489384", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5d49a1e-68", "ovs_interfaceid": "d5d49a1e-68e2-4463-a368-969f26b28b92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.594 227364 DEBUG nova.network.os_vif_util [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.594 227364 DEBUG os_vif [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.598 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.599 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5d49a1e-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.601 227364 DEBUG oslo_concurrency.lockutils [req-2a74691d-fa59-4a0c-9971-0b894bd0ce68 req-510d46f2-8ab3-436d-9118-f7e8a33bc251 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e8c981fd-dbe3-407d-8810-88f19dac3073" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.602 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.606 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 02:59:00 np0005539551 nova_compute[227360]: 2025-11-29 07:59:00.610 227364 INFO os_vif [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=d5d49a1e-68e2-4463-a368-969f26b28b92,network=Network(b448d860-e284-464a-af0f-12f52096cce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5d49a1e-68')
Nov 29 02:59:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/534305258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:02 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [NOTICE]   (241290) : haproxy version is 2.8.14-c23fe91
Nov 29 02:59:02 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [NOTICE]   (241290) : path to executable is /usr/sbin/haproxy
Nov 29 02:59:02 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [WARNING]  (241290) : Exiting Master process...
Nov 29 02:59:02 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [ALERT]    (241290) : Current worker (241292) exited with code 143 (Terminated)
Nov 29 02:59:02 np0005539551 neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8[241286]: [WARNING]  (241290) : All workers exited. Exiting... (0)
Nov 29 02:59:02 np0005539551 systemd[1]: libpod-cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149.scope: Deactivated successfully.
Nov 29 02:59:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:02 np0005539551 podman[241859]: 2025-11-29 07:59:02.157425412 +0000 UTC m=+2.851751562 container died cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:00.535625) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3350767 bytes OK
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:00.535701) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:02.676157) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:02.676206) EVENT_LOG_v1 {"time_micros": 1764403142676198, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:02.676227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5669626, prev total WAL file size 5719058, number of live WAL files 2.
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:02.678198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3272KB)], [54(10MB)]
Nov 29 02:59:02 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403142678395, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 14644696, "oldest_snapshot_seqno": -1}
Nov 29 02:59:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:59:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.930 227364 DEBUG nova.compute.manager [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.931 227364 DEBUG oslo_concurrency.lockutils [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.931 227364 DEBUG oslo_concurrency.lockutils [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.932 227364 DEBUG oslo_concurrency.lockutils [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.932 227364 DEBUG nova.compute.manager [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] No waiting events found dispatching network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:59:02 np0005539551 nova_compute[227360]: 2025-11-29 07:59:02.932 227364 WARNING nova.compute.manager [req-b082add3-c3c0-4155-ae71-aee75a44b035 req-48768af8-2823-4710-a00b-0c0237253344 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received unexpected event network-vif-plugged-d5d49a1e-68e2-4463-a368-969f26b28b92 for instance with vm_state active and task_state deleting.
Nov 29 02:59:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:04.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539551 nova_compute[227360]: 2025-11-29 07:59:04.206 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:04 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5990 keys, 10985117 bytes, temperature: kUnknown
Nov 29 02:59:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403144399828, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10985117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10944547, "index_size": 24524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153422, "raw_average_key_size": 25, "raw_value_size": 10836058, "raw_average_value_size": 1809, "num_data_blocks": 992, "num_entries": 5990, "num_filter_entries": 5990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403142, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:04 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:05 np0005539551 nova_compute[227360]: 2025-11-29 07:59:05.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:06.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:04.400081) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10985117 bytes
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.845849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.5 rd, 6.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.3) OK, records in: 7008, records dropped: 1018 output_compression: NoCompression
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.845894) EVENT_LOG_v1 {"time_micros": 1764403146845877, "job": 32, "event": "compaction_finished", "compaction_time_micros": 1721488, "compaction_time_cpu_micros": 51209, "output_level": 6, "num_output_files": 1, "total_output_size": 10985117, "num_input_records": 7008, "num_output_records": 5990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403146846751, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403146848806, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:02.678025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.848922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.848930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.848934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.848938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:06.848941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:06 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T07:59:06.852+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 2009..2749) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.338402599s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 2009..2749) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.338402599s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:59:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149-userdata-shm.mount: Deactivated successfully.
Nov 29 02:59:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay-2fe1260e275eb12434d5089610c6d9f48adb0be3d451464a97430fc3622469ef-merged.mount: Deactivated successfully.
Nov 29 02:59:07 np0005539551 nova_compute[227360]: 2025-11-29 07:59:07.380 227364 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 12.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:07 np0005539551 nova_compute[227360]: 2025-11-29 07:59:07.382 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <uuid>6e814e3b-3edb-4f37-8701-c37929994645</uuid>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <name>instance-0000001f</name>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <memory>196608</memory>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:name>tempest-MigrationsAdminTest-server-1156943130</nova:name>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:58:54</nova:creationTime>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.micro">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:memory>192</nova:memory>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="serial">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="uuid">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk.config">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/console.log" append="off"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:59:07 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:59:07 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:59:07 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:59:07 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:59:07 np0005539551 nova_compute[227360]: 2025-11-29 07:59:07.946 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:59:07 np0005539551 nova_compute[227360]: 2025-11-29 07:59:07.947 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:59:07 np0005539551 nova_compute[227360]: 2025-11-29 07:59:07.947 227364 INFO nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Using config drive#033[00m
Nov 29 02:59:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:08.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539551 podman[241859]: 2025-11-29 07:59:08.655493091 +0000 UTC m=+9.349819201 container cleanup cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:59:08 np0005539551 systemd[1]: libpod-conmon-cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149.scope: Deactivated successfully.
Nov 29 02:59:08 np0005539551 systemd-machined[190756]: New machine qemu-16-instance-0000001f.
Nov 29 02:59:08 np0005539551 systemd[1]: Started Virtual Machine qemu-16-instance-0000001f.
Nov 29 02:59:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:09 np0005539551 nova_compute[227360]: 2025-11-29 07:59:09.209 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:10.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:10 np0005539551 nova_compute[227360]: 2025-11-29 07:59:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:10 np0005539551 nova_compute[227360]: 2025-11-29 07:59:10.608 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:11 np0005539551 nova_compute[227360]: 2025-11-29 07:59:11.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:11 np0005539551 nova_compute[227360]: 2025-11-29 07:59:11.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:11 np0005539551 podman[241927]: 2025-11-29 07:59:11.416631249 +0000 UTC m=+2.727350893 container remove cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.423 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b51f14-fc4b-4ba9-b5ec-36a0f9362bf7]: (4, ('Sat Nov 29 07:58:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8 (cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149)\ncca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149\nSat Nov 29 07:59:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b448d860-e284-464a-af0f-12f52096cce8 (cca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149)\ncca91bd88e2643a9d54b04c1c58adb477d132c2eb537929e352c8dfbf5b26149\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.425 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[413fe1b9-d2f3-4b13-a4ec-acf189cf3973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.426 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb448d860-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:11 np0005539551 nova_compute[227360]: 2025-11-29 07:59:11.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:11 np0005539551 kernel: tapb448d860-e0: left promiscuous mode
Nov 29 02:59:11 np0005539551 nova_compute[227360]: 2025-11-29 07:59:11.441 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.443 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a38129-120a-4d23-b1c0-11b1f1bd1df9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.457 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a5fa18-fc27-48a9-84a7-74194a4dca48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.459 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f8534594-75a9-455e-b3bc-6d007ca82699]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.475 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6cd86d-6f83-4b9d-aae5-abc6aa751ac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613877, 'reachable_time': 22917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241992, 'error': None, 'target': 'ovnmeta-b448d860-e284-464a-af0f-12f52096cce8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 systemd[1]: run-netns-ovnmeta\x2db448d860\x2de284\x2d464a\x2daf0f\x2d12f52096cce8.mount: Deactivated successfully.
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.478 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b448d860-e284-464a-af0f-12f52096cce8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:59:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:11.478 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9797ee-acbb-4914-9605-9210727a6095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 02:59:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.344 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403153.344433, 6e814e3b-3edb-4f37-8701-c37929994645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.345 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.347 227364 DEBUG nova.compute.manager [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.351 227364 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance running successfully.#033[00m
Nov 29 02:59:13 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.354 227364 DEBUG nova.virt.libvirt.guest [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.354 227364 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.370 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.373 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.415 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.416 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403153.3467016, 6e814e3b-3edb-4f37-8701-c37929994645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.416 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Started (Lifecycle Event)#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.419 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.419 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.419 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.419 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.443 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.446 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.447 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.447 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.451 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:13 np0005539551 nova_compute[227360]: 2025-11-29 07:59:13.949 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:14.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.354 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403139.353224, e8c981fd-dbe3-407d-8810-88f19dac3073 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.355 227364 INFO nova.compute.manager [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.462 227364 DEBUG nova.compute.manager [None req-d468cd86-a17a-4888-a401-437c3fc9c3d2 - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.466 227364 DEBUG nova.compute.manager [None req-d468cd86-a17a-4888-a401-437c3fc9c3d2 - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:14 np0005539551 nova_compute[227360]: 2025-11-29 07:59:14.530 227364 INFO nova.compute.manager [None req-d468cd86-a17a-4888-a401-437c3fc9c3d2 - - - - - -] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:59:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.611 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.947 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.962 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.963 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.963 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.964 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.964 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.982 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.983 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.983 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.983 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:59:15 np0005539551 nova_compute[227360]: 2025-11-29 07:59:15.983 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.393250) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156393557, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 380, "num_deletes": 251, "total_data_size": 400892, "memory_usage": 409336, "flush_reason": "Manual Compaction"}
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2634606903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.466 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156555153, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 264658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29226, "largest_seqno": 29601, "table_properties": {"data_size": 262323, "index_size": 435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5973, "raw_average_key_size": 19, "raw_value_size": 257609, "raw_average_value_size": 825, "num_data_blocks": 19, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403141, "oldest_key_time": 1764403141, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 161957 microseconds, and 1748 cpu microseconds.
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.555214) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 264658 bytes OK
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.555234) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.560945) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.560991) EVENT_LOG_v1 {"time_micros": 1764403156560982, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.561015) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 398329, prev total WAL file size 398329, number of live WAL files 2.
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.561539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(258KB)], [57(10MB)]
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156561649, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11249775, "oldest_snapshot_seqno": -1}
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.580 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.581 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.584 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.585 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.684 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.688 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.689 227364 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.730 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.731 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4727MB free_disk=20.813446044921875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.732 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.732 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5787 keys, 9280908 bytes, temperature: kUnknown
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156788964, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9280908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9243095, "index_size": 22215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149986, "raw_average_key_size": 25, "raw_value_size": 9139539, "raw_average_value_size": 1579, "num_data_blocks": 888, "num_entries": 5787, "num_filter_entries": 5787, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.934 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Applying migration context for instance 6e814e3b-3edb-4f37-8701-c37929994645 as it has an incoming, in-progress migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3. Migration status is reverting _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.937 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating resource usage from migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.968 227364 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.974 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e8c981fd-dbe3-407d-8810-88f19dac3073 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.974 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6e814e3b-3edb-4f37-8701-c37929994645 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.975 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:59:16 np0005539551 nova_compute[227360]: 2025-11-29 07:59:16.976 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.789209) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9280908 bytes
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.026461) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.5 rd, 40.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(77.6) write-amplify(35.1) OK, records in: 6302, records dropped: 515 output_compression: NoCompression
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.026515) EVENT_LOG_v1 {"time_micros": 1764403157026494, "job": 34, "event": "compaction_finished", "compaction_time_micros": 227370, "compaction_time_cpu_micros": 29348, "output_level": 6, "num_output_files": 1, "total_output_size": 9280908, "num_input_records": 6302, "num_output_records": 5787, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403157026847, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403157030789, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:16.561431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.030842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.030848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.030852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.030855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-07:59:17.030858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.052 227364 INFO nova.virt.libvirt.driver [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Deleting instance files /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073_del#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.054 227364 INFO nova.virt.libvirt.driver [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Deletion of /var/lib/nova/instances/e8c981fd-dbe3-407d-8810-88f19dac3073_del complete#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.209 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.359 227364 INFO nova.compute.manager [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Took 18.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.360 227364 DEBUG oslo.service.loopingcall [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.361 227364 DEBUG nova.compute.manager [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.361 227364 DEBUG nova.network.neutron [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4201942923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.705 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.710 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.726 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.751 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.752 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.752 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.753 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.907 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.944 227364 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:17 np0005539551 nova_compute[227360]: 2025-11-29 07:59:17.980 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:18 np0005539551 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 02:59:18 np0005539551 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Consumed 5.142s CPU time.
Nov 29 02:59:18 np0005539551 systemd-machined[190756]: Machine qemu-16-instance-0000001f terminated.
Nov 29 02:59:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:18.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.327 227364 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance destroyed successfully.#033[00m
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.327 227364 DEBUG nova.objects.instance [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.348 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.349 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.374 227364 DEBUG nova.objects.instance [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:18 np0005539551 nova_compute[227360]: 2025-11-29 07:59:18.602 227364 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.213 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.353 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:59:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3056732937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.457 227364 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.463 227364 DEBUG nova.compute.provider_tree [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.489 227364 DEBUG nova.scheduler.client.report [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.553 227364 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:19 np0005539551 podman[242084]: 2025-11-29 07:59:19.61218 +0000 UTC m=+0.067976393 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:59:19 np0005539551 podman[242085]: 2025-11-29 07:59:19.618561495 +0000 UTC m=+0.069204477 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:59:19 np0005539551 podman[242083]: 2025-11-29 07:59:19.637237527 +0000 UTC m=+0.093033580 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.737 227364 DEBUG nova.network.neutron [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.763 227364 INFO nova.compute.manager [-] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Took 2.40 seconds to deallocate network for instance.#033[00m
Nov 29 02:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:19.851 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:19.851 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:19.851 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.856 227364 DEBUG nova.compute.manager [req-0fa19465-ecb2-4a95-a233-88c465fa3c10 req-e478f011-6f81-4c37-b859-a342ec0d06fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e8c981fd-dbe3-407d-8810-88f19dac3073] Received event network-vif-deleted-d5d49a1e-68e2-4463-a368-969f26b28b92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.863 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.863 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:19 np0005539551 nova_compute[227360]: 2025-11-29 07:59:19.948 227364 DEBUG oslo_concurrency.processutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3559134832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.370 227364 DEBUG oslo_concurrency.processutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.376 227364 DEBUG nova.compute.provider_tree [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.429 227364 DEBUG nova.scheduler.client.report [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.614 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.657 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.684 227364 INFO nova.scheduler.client.report [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Deleted allocations for instance e8c981fd-dbe3-407d-8810-88f19dac3073#033[00m
Nov 29 02:59:20 np0005539551 nova_compute[227360]: 2025-11-29 07:59:20.747 227364 DEBUG oslo_concurrency.lockutils [None req-4b9734a5-74d3-4238-bc3e-8f87d42bca28 e9bb5c912ca1471c895b844b43ac4831 d932eef5d5104a79bd9aec0bbc489384 - - default default] Lock "e8c981fd-dbe3-407d-8810-88f19dac3073" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 21.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:21 np0005539551 nova_compute[227360]: 2025-11-29 07:59:21.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:21 np0005539551 nova_compute[227360]: 2025-11-29 07:59:21.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:59:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 02:59:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539551 nova_compute[227360]: 2025-11-29 07:59:24.019 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:24.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:24 np0005539551 nova_compute[227360]: 2025-11-29 07:59:24.197 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539551 nova_compute[227360]: 2025-11-29 07:59:24.215 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:25 np0005539551 nova_compute[227360]: 2025-11-29 07:59:25.618 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.543 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.543 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.570 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.685 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.686 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.690 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.691 227364 INFO nova.compute.claims [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 02:59:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:28 np0005539551 nova_compute[227360]: 2025-11-29 07:59:28.846 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3891015562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.345 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.351 227364 DEBUG nova.compute.provider_tree [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.390 227364 DEBUG nova.scheduler.client.report [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.419 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.420 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.488 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.489 227364 DEBUG nova.network.neutron [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.507 227364 INFO nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.555 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.689 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.690 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:59:29 np0005539551 nova_compute[227360]: 2025-11-29 07:59:29.690 227364 INFO nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Creating image(s)#033[00m
Nov 29 02:59:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:30.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.288 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.323 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.358 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.363 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.452 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.453 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.454 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:30 np0005539551 nova_compute[227360]: 2025-11-29 07:59:30.454 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:31 np0005539551 nova_compute[227360]: 2025-11-29 07:59:31.670 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:31 np0005539551 nova_compute[227360]: 2025-11-29 07:59:31.673 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:31 np0005539551 nova_compute[227360]: 2025-11-29 07:59:31.696 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:31 np0005539551 nova_compute[227360]: 2025-11-29 07:59:31.699 227364 DEBUG nova.network.neutron [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:59:31 np0005539551 nova_compute[227360]: 2025-11-29 07:59:31.699 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:59:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 02:59:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.326 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403158.324669, 6e814e3b-3edb-4f37-8701-c37929994645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.327 227364 INFO nova.compute.manager [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.370 227364 DEBUG nova.compute.manager [None req-3ffb515b-cae4-4fb6-8786-334c6b4717ac - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.645 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.972s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.708 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] resizing rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.870 227364 DEBUG nova.objects.instance [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.886 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.886 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Ensure instance console log exists: /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.886 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.887 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.887 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.889 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.892 227364 WARNING nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.897 227364 DEBUG nova.virt.libvirt.host [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.897 227364 DEBUG nova.virt.libvirt.host [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.901 227364 DEBUG nova.virt.libvirt.host [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.901 227364 DEBUG nova.virt.libvirt.host [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.902 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.902 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.903 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.903 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.903 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.904 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.904 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.904 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.904 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.904 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.905 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.905 227364 DEBUG nova.virt.hardware [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:59:33 np0005539551 nova_compute[227360]: 2025-11-29 07:59:33.907 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3051608195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.333 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.361 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.365 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/193963749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.776 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.779 227364 DEBUG nova.objects.instance [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.801 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <uuid>35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</uuid>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <name>instance-00000020</name>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:name>tempest-MigrationsAdminTest-server-10669624</nova:name>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 07:59:33</nova:creationTime>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <system>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="serial">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="uuid">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </system>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <os>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </os>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <features>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </features>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </clock>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  <devices>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/console.log" append="off"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </serial>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <video>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </video>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </rng>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 02:59:34 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 02:59:34 np0005539551 nova_compute[227360]:  </devices>
Nov 29 02:59:34 np0005539551 nova_compute[227360]: </domain>
Nov 29 02:59:34 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:59:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.883 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.884 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.884 227364 INFO nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Using config drive
Nov 29 02:59:34 np0005539551 nova_compute[227360]: 2025-11-29 07:59:34.908 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:35 np0005539551 nova_compute[227360]: 2025-11-29 07:59:35.319 227364 INFO nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Creating config drive at /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config
Nov 29 02:59:35 np0005539551 nova_compute[227360]: 2025-11-29 07:59:35.324 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhvodq5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:35 np0005539551 nova_compute[227360]: 2025-11-29 07:59:35.450 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhvodq5u" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:36.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:36 np0005539551 nova_compute[227360]: 2025-11-29 07:59:36.798 227364 DEBUG nova.storage.rbd_utils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:36 np0005539551 nova_compute[227360]: 2025-11-29 07:59:36.802 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:36 np0005539551 nova_compute[227360]: 2025-11-29 07:59:36.818 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:38.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:39 np0005539551 nova_compute[227360]: 2025-11-29 07:59:39.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:41 np0005539551 nova_compute[227360]: 2025-11-29 07:59:41.820 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:43 np0005539551 nova_compute[227360]: 2025-11-29 07:59:43.873 227364 DEBUG oslo_concurrency.processutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:43 np0005539551 nova_compute[227360]: 2025-11-29 07:59:43.874 227364 INFO nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Deleting local config drive /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/disk.config because it was imported into RBD.
Nov 29 02:59:43 np0005539551 systemd-machined[190756]: New machine qemu-17-instance-00000020.
Nov 29 02:59:43 np0005539551 systemd[1]: Started Virtual Machine qemu-17-instance-00000020.
Nov 29 02:59:44 np0005539551 nova_compute[227360]: 2025-11-29 07:59:44.221 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:46 np0005539551 nova_compute[227360]: 2025-11-29 07:59:46.822 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.061 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403187.060424, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.062 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Resumed (Lifecycle Event)
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.067 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.068 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.073 227364 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance spawned successfully.
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.073 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.117 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.126 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.127 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.127 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.128 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.128 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.129 227364 DEBUG nova.virt.libvirt.driver [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.134 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.185 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.186 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403187.0620887, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.186 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Started (Lifecycle Event)#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.213 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.216 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.227 227364 INFO nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Took 17.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.228 227364 DEBUG nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.248 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.331 227364 INFO nova.compute.manager [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Took 18.69 seconds to build instance.#033[00m
Nov 29 02:59:47 np0005539551 nova_compute[227360]: 2025-11-29 07:59:47.366 227364 DEBUG oslo_concurrency.lockutils [None req-31fd584a-2031-4816-afb3-aab3dc14afcd 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:48.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:49 np0005539551 nova_compute[227360]: 2025-11-29 07:59:49.223 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:50 np0005539551 podman[242670]: 2025-11-29 07:59:50.628728905 +0000 UTC m=+0.075615293 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:59:50 np0005539551 podman[242671]: 2025-11-29 07:59:50.634547104 +0000 UTC m=+0.079173940 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:59:50 np0005539551 podman[242669]: 2025-11-29 07:59:50.675793224 +0000 UTC m=+0.121033927 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:59:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 02:59:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:51 np0005539551 nova_compute[227360]: 2025-11-29 07:59:51.824 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539551 nova_compute[227360]: 2025-11-29 07:59:54.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:59:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:59:55 np0005539551 nova_compute[227360]: 2025-11-29 07:59:55.750 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:55 np0005539551 nova_compute[227360]: 2025-11-29 07:59:55.751 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquired lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:55 np0005539551 nova_compute[227360]: 2025-11-29 07:59:55.752 227364 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:56 np0005539551 nova_compute[227360]: 2025-11-29 07:59:56.011 227364 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:56.024 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:56 np0005539551 nova_compute[227360]: 2025-11-29 07:59:56.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 07:59:56.025 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:59:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539551 nova_compute[227360]: 2025-11-29 07:59:56.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539551 nova_compute[227360]: 2025-11-29 07:59:56.829 227364 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539551 nova_compute[227360]: 2025-11-29 07:59:56.870 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Releasing lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.092 227364 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.092 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Creating file /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/7dfac4eb67de44a99b09a767bff73420.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.093 227364 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/7dfac4eb67de44a99b09a767bff73420.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.516 227364 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/7dfac4eb67de44a99b09a767bff73420.tmp" returned: 1 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.517 227364 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/7dfac4eb67de44a99b09a767bff73420.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.517 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Creating directory /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.518 227364 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.732 227364 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:57 np0005539551 nova_compute[227360]: 2025-11-29 07:59:57.736 227364 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:59:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 02:59:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:59 np0005539551 nova_compute[227360]: 2025-11-29 07:59:59.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:00:01 np0005539551 nova_compute[227360]: 2025-11-29 08:00:01.828 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:02.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:04.027 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:04 np0005539551 nova_compute[227360]: 2025-11-29 08:00:04.231 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:04.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:06 np0005539551 nova_compute[227360]: 2025-11-29 08:00:06.830 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:07 np0005539551 nova_compute[227360]: 2025-11-29 08:00:07.780 227364 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:00:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:00:08Z|00138|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:00:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:09 np0005539551 nova_compute[227360]: 2025-11-29 08:00:09.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 03:00:10 np0005539551 nova_compute[227360]: 2025-11-29 08:00:10.797 227364 INFO nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:00:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539551 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 29 03:00:10 np0005539551 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Consumed 14.366s CPU time.
Nov 29 03:00:10 np0005539551 systemd-machined[190756]: Machine qemu-17-instance-00000020 terminated.
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.016 227364 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance destroyed successfully.#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.020 227364 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.020 227364 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.441 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.572 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.573 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.574 227364 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:11 np0005539551 nova_compute[227360]: 2025-11-29 08:00:11.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:12 np0005539551 nova_compute[227360]: 2025-11-29 08:00:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:12.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:13 np0005539551 nova_compute[227360]: 2025-11-29 08:00:13.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:13 np0005539551 nova_compute[227360]: 2025-11-29 08:00:13.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:00:13 np0005539551 nova_compute[227360]: 2025-11-29 08:00:13.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:00:14 np0005539551 nova_compute[227360]: 2025-11-29 08:00:14.159 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:00:14 np0005539551 nova_compute[227360]: 2025-11-29 08:00:14.236 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:14 np0005539551 nova_compute[227360]: 2025-11-29 08:00:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:15 np0005539551 nova_compute[227360]: 2025-11-29 08:00:15.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:15 np0005539551 nova_compute[227360]: 2025-11-29 08:00:15.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.435 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:16.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.832 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1311954246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.875 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.934 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539551 nova_compute[227360]: 2025-11-29 08:00:16.935 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.064 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.065 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4906MB free_disk=20.785232543945312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.066 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.066 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.110 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration for instance 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.135 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating resource usage from migration 4ebfcc9e-3184-4432-81df-ac416954d5bb#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.136 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting to track outgoing migration 4ebfcc9e-3184-4432-81df-ac416954d5bb with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Nov 29 03:00:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.301 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration 4ebfcc9e-3184-4432-81df-ac416954d5bb is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.302 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.302 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.408 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.430 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.430 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.454 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.473 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.508 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3433864881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.905 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.912 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.953 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.982 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:00:17 np0005539551 nova_compute[227360]: 2025-11-29 08:00:17.982 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 03:00:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:19 np0005539551 nova_compute[227360]: 2025-11-29 08:00:19.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:19.851 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:19.852 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:19.852 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539551 nova_compute[227360]: 2025-11-29 08:00:20.983 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:21 np0005539551 nova_compute[227360]: 2025-11-29 08:00:21.016 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:21 np0005539551 nova_compute[227360]: 2025-11-29 08:00:21.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:21 np0005539551 nova_compute[227360]: 2025-11-29 08:00:21.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:00:21 np0005539551 podman[242835]: 2025-11-29 08:00:21.607893638 +0000 UTC m=+0.056722145 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:00:21 np0005539551 podman[242836]: 2025-11-29 08:00:21.632327198 +0000 UTC m=+0.078729638 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:00:21 np0005539551 podman[242834]: 2025-11-29 08:00:21.633816038 +0000 UTC m=+0.082704966 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:00:21 np0005539551 nova_compute[227360]: 2025-11-29 08:00:21.834 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539551 nova_compute[227360]: 2025-11-29 08:00:24.278 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.015 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403211.0144763, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.015 227364 INFO nova.compute.manager [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.043 227364 DEBUG nova.compute.manager [None req-c0e24350-3909-4edc-94e7-994f70260477 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.046 227364 DEBUG nova.compute.manager [None req-c0e24350-3909-4edc-94e7-994f70260477 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.103 227364 INFO nova.compute.manager [None req-c0e24350-3909-4edc-94e7-994f70260477 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.152 227364 INFO nova.compute.manager [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Swapping old allocation on dict_keys(['67c71d68-0dd7-4589-b775-189b4191a844']) held by migration 4ebfcc9e-3184-4432-81df-ac416954d5bb for instance#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.255 227364 DEBUG nova.scheduler.client.report [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Overwriting current allocation {'allocations': {'a73c606e-2495-4af4-b703-8d4b3001fdf5': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 32}}, 'project_id': '6717732f9fa242b181f58881b03d246f', 'user_id': '51ae07f600c545c0b4c7fae00657ea40', 'consumer_generation': 1} on consumer 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Nov 29 03:00:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.555 227364 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.555 227364 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.556 227364 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.756 227364 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:26 np0005539551 nova_compute[227360]: 2025-11-29 08:00:26.836 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.048 227364 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.099 227364 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.100 227364 DEBUG nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.163 227364 DEBUG nova.storage.rbd_utils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rolling back rbd image(35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.261 227364 DEBUG nova.storage.rbd_utils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] removing snapshot(nova-resize) on rbd image(35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:00:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.846 227364 DEBUG nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.848 227364 WARNING nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.853 227364 DEBUG nova.virt.libvirt.host [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.853 227364 DEBUG nova.virt.libvirt.host [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.856 227364 DEBUG nova.virt.libvirt.host [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.856 227364 DEBUG nova.virt.libvirt.host [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.857 227364 DEBUG nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.858 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.858 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.858 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.858 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.859 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.859 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.859 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.859 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.859 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.860 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.860 227364 DEBUG nova.virt.hardware [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.860 227364 DEBUG nova.objects.instance [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:27 np0005539551 nova_compute[227360]: 2025-11-29 08:00:27.891 227364 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3838530879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:28 np0005539551 nova_compute[227360]: 2025-11-29 08:00:28.314 227364 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:28 np0005539551 nova_compute[227360]: 2025-11-29 08:00:28.354 227364 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:28.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094774873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:28 np0005539551 nova_compute[227360]: 2025-11-29 08:00:28.822 227364 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:28 np0005539551 nova_compute[227360]: 2025-11-29 08:00:28.825 227364 DEBUG nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <uuid>35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</uuid>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <name>instance-00000020</name>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:name>tempest-MigrationsAdminTest-server-10669624</nova:name>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:00:27</nova:creationTime>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="serial">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="uuid">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/console.log" append="off"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:00:28 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:00:28 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:00:28 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:00:28 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:28 np0005539551 systemd-machined[190756]: New machine qemu-18-instance-00000020.
Nov 29 03:00:28 np0005539551 systemd[1]: Started Virtual Machine qemu-18-instance-00000020.
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.281 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.778 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403229.7780032, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.779 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.781 227364 DEBUG nova.compute.manager [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.784 227364 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance running successfully.#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.785 227364 DEBUG nova.virt.libvirt.driver [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.825 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.832 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.886 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.887 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403229.7790298, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.887 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.915 227364 INFO nova.compute.manager [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance to original state: 'active'#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.923 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.928 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:29 np0005539551 nova_compute[227360]: 2025-11-29 08:00:29.960 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:00:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:31 np0005539551 nova_compute[227360]: 2025-11-29 08:00:31.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:32.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.262 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.262 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.263 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.263 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.263 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.264 227364 INFO nova.compute.manager [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Terminating instance#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.265 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.265 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.266 227364 DEBUG nova.network.neutron [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.595 227364 DEBUG nova.network.neutron [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.853 227364 DEBUG nova.network.neutron [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.879 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:33 np0005539551 nova_compute[227360]: 2025-11-29 08:00:33.880 227364 DEBUG nova.compute.manager [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:00:33 np0005539551 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 29 03:00:33 np0005539551 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000020.scope: Consumed 5.249s CPU time.
Nov 29 03:00:33 np0005539551 systemd-machined[190756]: Machine qemu-18-instance-00000020 terminated.
Nov 29 03:00:34 np0005539551 nova_compute[227360]: 2025-11-29 08:00:34.100 227364 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance destroyed successfully.#033[00m
Nov 29 03:00:34 np0005539551 nova_compute[227360]: 2025-11-29 08:00:34.100 227364 DEBUG nova.objects.instance [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:34 np0005539551 nova_compute[227360]: 2025-11-29 08:00:34.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:36.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539551 nova_compute[227360]: 2025-11-29 08:00:36.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.235 227364 INFO nova.virt.libvirt.driver [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Deleting instance files /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_del#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.235 227364 INFO nova.virt.libvirt.driver [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Deletion of /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_del complete#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.307 227364 INFO nova.compute.manager [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Took 3.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.307 227364 DEBUG oslo.service.loopingcall [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.307 227364 DEBUG nova.compute.manager [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.308 227364 DEBUG nova.network.neutron [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.462 227364 DEBUG nova.network.neutron [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.474 227364 DEBUG nova.network.neutron [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.486 227364 INFO nova.compute.manager [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.529 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.530 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.536 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.568 227364 INFO nova.scheduler.client.report [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocations for instance 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd#033[00m
Nov 29 03:00:37 np0005539551 nova_compute[227360]: 2025-11-29 08:00:37.672 227364 DEBUG oslo_concurrency.lockutils [None req-b3c1192b-6246-4c79-9a85-0f6480a74612 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:38.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:39 np0005539551 nova_compute[227360]: 2025-11-29 08:00:39.316 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539551 nova_compute[227360]: 2025-11-29 08:00:41.842 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:42.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539551 nova_compute[227360]: 2025-11-29 08:00:44.317 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:44.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:46.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539551 nova_compute[227360]: 2025-11-29 08:00:46.843 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:49 np0005539551 nova_compute[227360]: 2025-11-29 08:00:49.097 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403234.0966637, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:49 np0005539551 nova_compute[227360]: 2025-11-29 08:00:49.098 227364 INFO nova.compute.manager [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:49 np0005539551 nova_compute[227360]: 2025-11-29 08:00:49.125 227364 DEBUG nova.compute.manager [None req-b44447a7-c879-4024-b908-fc59a3706fb5 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:49 np0005539551 nova_compute[227360]: 2025-11-29 08:00:49.319 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.239 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.240 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.266 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.349 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.349 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.356 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.356 227364 INFO nova.compute.claims [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:00:50 np0005539551 nova_compute[227360]: 2025-11-29 08:00:50.537 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.100 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.101 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.153 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.242 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1305621088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.312 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.316 227364 DEBUG nova.compute.provider_tree [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.329 227364 DEBUG nova.scheduler.client.report [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.352 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.352 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.354 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.359 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.360 227364 INFO nova.compute.claims [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.421 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.422 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.445 227364 INFO nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.466 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.512 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.571 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.573 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.573 227364 INFO nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Creating image(s)#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.599 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.622 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.649 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.653 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.677 227364 DEBUG nova.policy [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba1b423b724a47f692a3d9cbf91860d7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'afaf65dfeab546ee991af0438784b8a3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.716 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.717 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.717 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.718 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.741 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.745 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b8808203-6263-4e9c-bff5-7d273d143a50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:51 np0005539551 nova_compute[227360]: 2025-11-29 08:00:51.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2827734934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.016 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.021 227364 DEBUG nova.compute.provider_tree [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.039 227364 DEBUG nova.scheduler.client.report [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.096 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.097 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.166 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.166 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:00:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539551 podman[243227]: 2025-11-29 08:00:52.602889093 +0000 UTC m=+0.057112336 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:00:52 np0005539551 podman[243226]: 2025-11-29 08:00:52.625316767 +0000 UTC m=+0.079608691 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:52 np0005539551 podman[243228]: 2025-11-29 08:00:52.626674325 +0000 UTC m=+0.077744301 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.797 227364 INFO nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.829 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:00:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.945 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.946 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:00:52 np0005539551 nova_compute[227360]: 2025-11-29 08:00:52.946 227364 INFO nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Creating image(s)#033[00m
Nov 29 03:00:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.151 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.417 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.601 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.604 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.625 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.631 227364 DEBUG nova.policy [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b956671bad8a4a0b99469a9d0258a2bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.634 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b8808203-6263-4e9c-bff5-7d273d143a50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.668 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Successfully created port: 26202237-bc25-465c-bcab-caf462b96a73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.672 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.673 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.674 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.674 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.700 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.703 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.764 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] resizing rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.897 227364 DEBUG nova.objects.instance [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'migration_context' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.913 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.913 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Ensure instance console log exists: /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.914 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.914 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:54 np0005539551 nova_compute[227360]: 2025-11-29 08:00:54.914 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:55 np0005539551 nova_compute[227360]: 2025-11-29 08:00:55.430 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Successfully created port: 3d89eff8-f34b-494c-8c30-7457cbc1852c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:55 np0005539551 nova_compute[227360]: 2025-11-29 08:00:55.659 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:56.486 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:00:56.487 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:00:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.573 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.575 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Successfully updated port: 26202237-bc25-465c-bcab-caf462b96a73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.576 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Successfully updated port: 3d89eff8-f34b-494c-8c30-7457cbc1852c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.578 227364 DEBUG nova.compute.manager [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-changed-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.578 227364 DEBUG nova.compute.manager [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing instance network info cache due to event network-changed-26202237-bc25-465c-bcab-caf462b96a73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.579 227364 DEBUG oslo_concurrency.lockutils [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.579 227364 DEBUG oslo_concurrency.lockutils [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.579 227364 DEBUG nova.network.neutron [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing network info cache for port 26202237-bc25-465c-bcab-caf462b96a73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.581 227364 DEBUG nova.compute.manager [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.581 227364 DEBUG nova.compute.manager [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing instance network info cache due to event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.581 227364 DEBUG oslo_concurrency.lockutils [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.581 227364 DEBUG oslo_concurrency.lockutils [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.582 227364 DEBUG nova.network.neutron [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing network info cache for port 3d89eff8-f34b-494c-8c30-7457cbc1852c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.654 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.657 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.661 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] resizing rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.791 227364 DEBUG nova.objects.instance [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'migration_context' on Instance uuid 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.804 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.804 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Ensure instance console log exists: /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.805 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.805 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:58 np0005539551 nova_compute[227360]: 2025-11-29 08:00:58.805 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:00:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.051 227364 DEBUG nova.network.neutron [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.063 227364 DEBUG nova.network.neutron [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.480 227364 DEBUG nova.network.neutron [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.499 227364 DEBUG oslo_concurrency.lockutils [req-cdf7abb6-4f82-4254-873d-fab3ad32cc04 req-8c1cf02c-bf09-47f5-8af6-e4e508ec62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.499 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquired lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.500 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.534 227364 DEBUG nova.network.neutron [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.550 227364 DEBUG oslo_concurrency.lockutils [req-ef0fe9b5-af8c-447a-837d-0d48b1241934 req-06b8b8e5-7c91-4a78-8b29-3c038ca7f4d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.551 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquired lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:59 np0005539551 nova_compute[227360]: 2025-11-29 08:00:59.552 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:01:00 np0005539551 nova_compute[227360]: 2025-11-29 08:01:00.112 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:00 np0005539551 nova_compute[227360]: 2025-11-29 08:01:00.140 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.075 227364 DEBUG nova.network.neutron [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updating instance_info_cache with network_info: [{"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.081 227364 DEBUG nova.network.neutron [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.163 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Releasing lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.164 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance network_info: |[{"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.168 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Start _get_guest_xml network_info=[{"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.171 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Releasing lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.171 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance network_info: |[{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.176 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Start _get_guest_xml network_info=[{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.183 227364 WARNING nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.186 227364 WARNING nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.189 227364 DEBUG nova.virt.libvirt.host [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.190 227364 DEBUG nova.virt.libvirt.host [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.193 227364 DEBUG nova.virt.libvirt.host [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.194 227364 DEBUG nova.virt.libvirt.host [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.195 227364 DEBUG nova.virt.libvirt.host [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.196 227364 DEBUG nova.virt.libvirt.host [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.198 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.199 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.200 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.200 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.201 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.201 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.202 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.202 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.202 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.203 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.203 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.204 227364 DEBUG nova.virt.hardware [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.208 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.234 227364 DEBUG nova.virt.libvirt.host [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.235 227364 DEBUG nova.virt.libvirt.host [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.238 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.239 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.240 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.240 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.241 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.241 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.241 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.242 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.242 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.243 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.243 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.243 227364 DEBUG nova.virt.hardware [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.249 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2849144274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:02.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.624 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.675 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1420353013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.680 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.709 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.737 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:02 np0005539551 nova_compute[227360]: 2025-11-29 08:01:02.742 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/191339202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.127 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.130 227364 DEBUG nova.virt.libvirt.vif [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-194902022',display_name='tempest-FloatingIPsAssociationTestJSON-server-194902022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-194902022',id=36,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-a5c1w0pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:52Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=9d6124aa-da05-4d06-86b2-1d2bb6c9e89e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.131 227364 DEBUG nova.network.os_vif_util [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.133 227364 DEBUG nova.network.os_vif_util [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.134 227364 DEBUG nova.objects.instance [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.152 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <uuid>9d6124aa-da05-4d06-86b2-1d2bb6c9e89e</uuid>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <name>instance-00000024</name>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-194902022</nova:name>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:01:02</nova:creationTime>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:user uuid="b956671bad8a4a0b99469a9d0258a2bc">tempest-FloatingIPsAssociationTestJSON-304325142-project-member</nova:user>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:project uuid="86064cd197e14fa8a17d2a0d9547af3e">tempest-FloatingIPsAssociationTestJSON-304325142</nova:project>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:port uuid="3d89eff8-f34b-494c-8c30-7457cbc1852c">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="serial">9d6124aa-da05-4d06-86b2-1d2bb6c9e89e</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="uuid">9d6124aa-da05-4d06-86b2-1d2bb6c9e89e</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:3a:38:65"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="tap3d89eff8-f3"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/console.log" append="off"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:01:03 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:01:03 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.161 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Preparing to wait for external event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.161 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.162 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.162 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.163 227364 DEBUG nova.virt.libvirt.vif [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-194902022',display_name='tempest-FloatingIPsAssociationTestJSON-server-194902022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-194902022',id=36,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-a5c1w0pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:52Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=9d6124aa-da05-4d06-86b2-1d2bb6c9e89e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.163 227364 DEBUG nova.network.os_vif_util [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.164 227364 DEBUG nova.network.os_vif_util [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.165 227364 DEBUG os_vif [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.166 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.167 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.167 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3262941353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.174 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d89eff8-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.175 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d89eff8-f3, col_values=(('external_ids', {'iface-id': '3d89eff8-f34b-494c-8c30-7457cbc1852c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:38:65', 'vm-uuid': '9d6124aa-da05-4d06-86b2-1d2bb6c9e89e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.177 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 NetworkManager[48922]: <info>  [1764403263.1781] manager: (tap3d89eff8-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.184 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.186 227364 INFO os_vif [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3')#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.193 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.195 227364 DEBUG nova.virt.libvirt.vif [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2061019622',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2061019622',id=35,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4zq3z6vy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=b8808203-6263-4e9c-bff5-7d273d143a50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.196 227364 DEBUG nova.network.os_vif_util [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.197 227364 DEBUG nova.network.os_vif_util [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.198 227364 DEBUG nova.objects.instance [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.213 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <uuid>b8808203-6263-4e9c-bff5-7d273d143a50</uuid>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <name>instance-00000023</name>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-2061019622</nova:name>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:01:02</nova:creationTime>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:user uuid="ba1b423b724a47f692a3d9cbf91860d7">tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member</nova:user>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:project uuid="afaf65dfeab546ee991af0438784b8a3">tempest-UpdateMultiattachVolumeNegativeTest-343315775</nova:project>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <nova:port uuid="26202237-bc25-465c-bcab-caf462b96a73">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="serial">b8808203-6263-4e9c-bff5-7d273d143a50</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="uuid">b8808203-6263-4e9c-bff5-7d273d143a50</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/b8808203-6263-4e9c-bff5-7d273d143a50_disk">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/b8808203-6263-4e9c-bff5-7d273d143a50_disk.config">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ef:0f:ee"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <target dev="tap26202237-bc"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/console.log" append="off"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:01:03 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:01:03 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:01:03 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:01:03 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.215 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Preparing to wait for external event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.216 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.216 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.216 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.217 227364 DEBUG nova.virt.libvirt.vif [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2061019622',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2061019622',id=35,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4zq3z6vy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=b8808203-6263-4e9c-bff5-7d273d143a50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.217 227364 DEBUG nova.network.os_vif_util [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.218 227364 DEBUG nova.network.os_vif_util [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.219 227364 DEBUG os_vif [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.219 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.220 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.222 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.223 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26202237-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.223 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26202237-bc, col_values=(('external_ids', {'iface-id': '26202237-bc25-465c-bcab-caf462b96a73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:0f:ee', 'vm-uuid': 'b8808203-6263-4e9c-bff5-7d273d143a50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.225 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 NetworkManager[48922]: <info>  [1764403263.2256] manager: (tap26202237-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.233 227364 INFO os_vif [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc')#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.314 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.315 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.315 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No VIF found with MAC fa:16:3e:ef:0f:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.316 227364 INFO nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Using config drive#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.339 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.348 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.349 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.349 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No VIF found with MAC fa:16:3e:3a:38:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.350 227364 INFO nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Using config drive#033[00m
Nov 29 03:01:03 np0005539551 nova_compute[227360]: 2025-11-29 08:01:03.377 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.067 227364 INFO nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Creating config drive at /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.074 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv057cw5w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.102 227364 INFO nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Creating config drive at /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.108 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphxq0i6hc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.208 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv057cw5w" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.243 227364 DEBUG nova.storage.rbd_utils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image b8808203-6263-4e9c-bff5-7d273d143a50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.250 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config b8808203-6263-4e9c-bff5-7d273d143a50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.279 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphxq0i6hc" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.311 227364 DEBUG nova.storage.rbd_utils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.315 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:04.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.826 227364 DEBUG oslo_concurrency.processutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.827 227364 INFO nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Deleting local config drive /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.835 227364 DEBUG oslo_concurrency.processutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config b8808203-6263-4e9c-bff5-7d273d143a50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.835 227364 INFO nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Deleting local config drive /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50/disk.config because it was imported into RBD.#033[00m
Nov 29 03:01:04 np0005539551 kernel: tap3d89eff8-f3: entered promiscuous mode
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9083] manager: (tap3d89eff8-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.911 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:04Z|00139|binding|INFO|Claiming lport 3d89eff8-f34b-494c-8c30-7457cbc1852c for this chassis.
Nov 29 03:01:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:04Z|00140|binding|INFO|3d89eff8-f34b-494c-8c30-7457cbc1852c: Claiming fa:16:3e:3a:38:65 10.100.0.13
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9146] manager: (tap26202237-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 03:01:04 np0005539551 kernel: tap26202237-bc: entered promiscuous mode
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.917 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.921 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:04Z|00141|if_status|INFO|Not updating pb chassis for 26202237-bc25-465c-bcab-caf462b96a73 now as sb is readonly
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.930 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:38:65 10.100.0.13'], port_security=['fa:16:3e:3a:38:65 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9d6124aa-da05-4d06-86b2-1d2bb6c9e89e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1ea0603-2fd5-45b7-93c4-f54e4ea109a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2df18a4d-96ab-4a0e-9a1e-536ced01a074, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3d89eff8-f34b-494c-8c30-7457cbc1852c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.931 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3d89eff8-f34b-494c-8c30-7457cbc1852c in datapath 4537c1f2-a597-4d7c-a9c3-2ad7483510a6 bound to our chassis#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.933 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4537c1f2-a597-4d7c-a9c3-2ad7483510a6#033[00m
Nov 29 03:01:04 np0005539551 systemd-udevd[243946]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:04 np0005539551 systemd-udevd[243944]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.949 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4c491c73-58dc-401a-9e79-43cea129ef13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.951 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4537c1f2-a1 in ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.953 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4537c1f2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.953 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3f890b8e-3d3e-4f1a-85cf-79e6b7300842]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.955 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c868fa9-8b6c-4838-8624-65d630a83db8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9663] device (tap26202237-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9674] device (tap26202237-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9680] device (tap3d89eff8-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:04 np0005539551 NetworkManager[48922]: <info>  [1764403264.9689] device (tap3d89eff8-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:04 np0005539551 systemd-machined[190756]: New machine qemu-20-instance-00000023.
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.971 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf5423b-b3ae-4df4-b582-d1b1eef46e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:04.986 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[681b3c25-57b0-441c-9f5d-e9a298fce294]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:04 np0005539551 nova_compute[227360]: 2025-11-29 08:01:04.991 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:04 np0005539551 systemd[1]: Started Virtual Machine qemu-20-instance-00000023.
Nov 29 03:01:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:04Z|00142|binding|INFO|Claiming lport 26202237-bc25-465c-bcab-caf462b96a73 for this chassis.
Nov 29 03:01:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:04Z|00143|binding|INFO|26202237-bc25-465c-bcab-caf462b96a73: Claiming fa:16:3e:ef:0f:ee 10.100.0.12
Nov 29 03:01:04 np0005539551 systemd-machined[190756]: New machine qemu-19-instance-00000024.
Nov 29 03:01:04 np0005539551 systemd[1]: Started Virtual Machine qemu-19-instance-00000024.
Nov 29 03:01:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:05Z|00144|binding|INFO|Setting lport 3d89eff8-f34b-494c-8c30-7457cbc1852c ovn-installed in OVS
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.002 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:05Z|00145|binding|INFO|Setting lport 3d89eff8-f34b-494c-8c30-7457cbc1852c up in Southbound
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.006 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:0f:ee 10.100.0.12'], port_security=['fa:16:3e:ef:0f:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8808203-6263-4e9c-bff5-7d273d143a50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f73b0808-21fd-43a4-809d-85e512de1cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'afaf65dfeab546ee991af0438784b8a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c6ebd91-3a75-49da-9c37-da4a9dd74667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc8191a-1888-462d-843d-a3e5df7bfc2c, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=26202237-bc25-465c-bcab-caf462b96a73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.024 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a3235fd8-c1e4-4b5e-97d3-8bd710cac89c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 NetworkManager[48922]: <info>  [1764403265.0348] manager: (tap4537c1f2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.034 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89f18006-ca58-4931-955b-e8564063edc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:05Z|00146|binding|INFO|Setting lport 26202237-bc25-465c-bcab-caf462b96a73 ovn-installed in OVS
Nov 29 03:01:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:05Z|00147|binding|INFO|Setting lport 26202237-bc25-465c-bcab-caf462b96a73 up in Southbound
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.062 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.066 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3c8157-ad23-432e-9d80-208afc9ef296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.069 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[62c3289a-e902-4434-863d-f89c88c5f588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 NetworkManager[48922]: <info>  [1764403265.1052] device (tap4537c1f2-a0): carrier: link connected
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.112 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8530c337-0407-410d-b3c1-e2078612b2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.129 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea0530a-2d2b-49ad-bd47-90ee868bc2ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4537c1f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631198, 'reachable_time': 17159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243986, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.146 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a98538-5cbb-4487-89f2-71656afe3e8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:9a78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 631198, 'tstamp': 631198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243987, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.159 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0483448c-7fdf-490b-a183-bac9022626e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4537c1f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631198, 'reachable_time': 17159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243988, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.186 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[974e668d-5839-434b-91f4-41ad1f876cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.238 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a9412a-fe55-4a07-b0fa-199a0388b78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.239 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4537c1f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.239 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.240 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4537c1f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.241 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 NetworkManager[48922]: <info>  [1764403265.2425] manager: (tap4537c1f2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 03:01:05 np0005539551 kernel: tap4537c1f2-a0: entered promiscuous mode
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.244 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.245 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4537c1f2-a0, col_values=(('external_ids', {'iface-id': 'a70c8006-642e-4a6b-bad1-4f848cd680db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:05Z|00148|binding|INFO|Releasing lport a70c8006-642e-4a6b-bad1-4f848cd680db from this chassis (sb_readonly=0)
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.246 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.247 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.248 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[71c79a29-451f-4cff-b7ba-6fb8747d50d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.248 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4537c1f2-a597-4d7c-a9c3-2ad7483510a6
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4537c1f2-a597-4d7c-a9c3-2ad7483510a6
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:05.249 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'env', 'PROCESS_TAG=haproxy-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.259 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.609 227364 DEBUG nova.compute.manager [req-db0669ce-766b-45cd-97b9-d41e6b2b97a5 req-70b79636-4391-4153-8c74-0ebb2d8e8bc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.610 227364 DEBUG oslo_concurrency.lockutils [req-db0669ce-766b-45cd-97b9-d41e6b2b97a5 req-70b79636-4391-4153-8c74-0ebb2d8e8bc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.610 227364 DEBUG oslo_concurrency.lockutils [req-db0669ce-766b-45cd-97b9-d41e6b2b97a5 req-70b79636-4391-4153-8c74-0ebb2d8e8bc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.611 227364 DEBUG oslo_concurrency.lockutils [req-db0669ce-766b-45cd-97b9-d41e6b2b97a5 req-70b79636-4391-4153-8c74-0ebb2d8e8bc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.611 227364 DEBUG nova.compute.manager [req-db0669ce-766b-45cd-97b9-d41e6b2b97a5 req-70b79636-4391-4153-8c74-0ebb2d8e8bc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Processing event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:01:05 np0005539551 podman[244048]: 2025-11-29 08:01:05.617053587 +0000 UTC m=+0.067489799 container create c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:01:05 np0005539551 podman[244048]: 2025-11-29 08:01:05.57152303 +0000 UTC m=+0.021959162 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.696 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.695647, 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.697 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.699 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.702 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.705 227364 INFO nova.virt.libvirt.driver [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance spawned successfully.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.705 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.717 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.720 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.734 227364 DEBUG nova.compute.manager [req-da0eecf4-3d83-42c1-9c5f-6ca64272d5af req-af8b24ac-4383-4b81-8758-b66719780490 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.735 227364 DEBUG oslo_concurrency.lockutils [req-da0eecf4-3d83-42c1-9c5f-6ca64272d5af req-af8b24ac-4383-4b81-8758-b66719780490 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.735 227364 DEBUG oslo_concurrency.lockutils [req-da0eecf4-3d83-42c1-9c5f-6ca64272d5af req-af8b24ac-4383-4b81-8758-b66719780490 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.736 227364 DEBUG oslo_concurrency.lockutils [req-da0eecf4-3d83-42c1-9c5f-6ca64272d5af req-af8b24ac-4383-4b81-8758-b66719780490 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.736 227364 DEBUG nova.compute.manager [req-da0eecf4-3d83-42c1-9c5f-6ca64272d5af req-af8b24ac-4383-4b81-8758-b66719780490 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Processing event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.736 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.744 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.747 227364 INFO nova.virt.libvirt.driver [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance spawned successfully.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.747 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.758 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.758 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.759 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.759 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.759 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.760 227364 DEBUG nova.virt.libvirt.driver [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.763 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.763 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.695819, 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.763 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.770 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.770 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.771 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.771 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.772 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.772 227364 DEBUG nova.virt.libvirt.driver [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.799 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.802 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.6971505, b8808203-6263-4e9c-bff5-7d273d143a50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.803 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.840 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.843 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.862 227364 INFO nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Took 12.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.868 227364 DEBUG nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.872 227364 INFO nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Took 14.30 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.873 227364 DEBUG nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.873 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.874 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.6972196, b8808203-6263-4e9c-bff5-7d273d143a50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.874 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:01:05 np0005539551 systemd[1]: Started libpod-conmon-c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4.scope.
Nov 29 03:01:05 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:01:05 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770d707f1b25abf27920a21e87939288e58170512d74baa16d5c96c437b13f67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.940 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.946 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.701624, 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.947 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.977 227364 INFO nova.compute.manager [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Took 14.77 seconds to build instance.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.983 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.986 227364 INFO nova.compute.manager [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Took 15.67 seconds to build instance.#033[00m
Nov 29 03:01:05 np0005539551 nova_compute[227360]: 2025-11-29 08:01:05.991 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:06 np0005539551 podman[244048]: 2025-11-29 08:01:06.014858505 +0000 UTC m=+0.465294637 container init c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.015 227364 DEBUG oslo_concurrency.lockutils [None req-ef50b8bb-48f6-40da-aaa5-9eb3f91af35a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.016 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403265.7390833, b8808203-6263-4e9c-bff5-7d273d143a50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.017 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.019 227364 DEBUG oslo_concurrency.lockutils [None req-dff7d455-d302-4d41-9e84-750329311772 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:06 np0005539551 podman[244048]: 2025-11-29 08:01:06.0208807 +0000 UTC m=+0.471316812 container start c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:01:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.037 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.042 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:06 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [NOTICE]   (244169) : New worker (244171) forked
Nov 29 03:01:06 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [NOTICE]   (244169) : Loading success.
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.099 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 26202237-bc25-465c-bcab-caf462b96a73 in datapath f73b0808-21fd-43a4-809d-85e512de1cb7 unbound from our chassis#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.101 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f73b0808-21fd-43a4-809d-85e512de1cb7#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.111 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd6d1aa-c726-4b0c-8926-b2919fe773e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.112 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf73b0808-21 in ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.114 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf73b0808-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.115 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5704618b-396f-4e83-b3a2-1f0aeb1a110d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.115 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cc329d5c-8c0d-441a-874e-592703a8e6e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.129 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6b7c4b-4309-4dd2-8ab5-f0c896b5e7b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.152 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45a11476-88dc-4fa8-be64-e01d9421ce66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.179 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e04079e2-d19e-4f36-a3dc-ef3f23398779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.186 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[700218bd-f78e-4bea-85c5-3ae015757344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 systemd-udevd[243964]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:06 np0005539551 NetworkManager[48922]: <info>  [1764403266.1885] manager: (tapf73b0808-20): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.217 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[500bb7ac-3443-4464-b5c0-45b3fb2c51fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.220 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[342b66bb-0c06-40be-915b-270335066d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 NetworkManager[48922]: <info>  [1764403266.2447] device (tapf73b0808-20): carrier: link connected
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.248 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f51fb8-1a87-4b29-b93c-3fad620e78fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.267 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aecc3536-ce3c-4e4e-b6cf-818f8be5a809]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf73b0808-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631312, 'reachable_time': 30596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244190, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.282 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23c0e4a3-9514-4f64-ab8b-6f996c8902e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:9a0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 631312, 'tstamp': 631312}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244191, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.298 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e642e3fd-e894-4d54-a3c7-bf33744c2341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf73b0808-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631312, 'reachable_time': 30596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244192, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.329 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5d35db02-8c33-4e42-a639-e4522e6939da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.395 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45b03d6c-2acf-48dd-ab32-4292420fa4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.397 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf73b0808-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.398 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.398 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf73b0808-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:06 np0005539551 kernel: tapf73b0808-20: entered promiscuous mode
Nov 29 03:01:06 np0005539551 NetworkManager[48922]: <info>  [1764403266.4024] manager: (tapf73b0808-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.405 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf73b0808-20, col_values=(('external_ids', {'iface-id': '76e8e3af-f64b-4dff-8ae5-0367f134cc2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:06Z|00149|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:01:06 np0005539551 nova_compute[227360]: 2025-11-29 08:01:06.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.439 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.440 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[76247350-4b13-4d5c-81b0-af030f65f9c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.440 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-f73b0808-21fd-43a4-809d-85e512de1cb7
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID f73b0808-21fd-43a4-809d-85e512de1cb7
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.442 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'env', 'PROCESS_TAG=haproxy-f73b0808-21fd-43a4-809d-85e512de1cb7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f73b0808-21fd-43a4-809d-85e512de1cb7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:06.490 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:06 np0005539551 podman[244224]: 2025-11-29 08:01:06.793817384 +0000 UTC m=+0.024651016 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:07 np0005539551 podman[244224]: 2025-11-29 08:01:07.055676428 +0000 UTC m=+0.286510060 container create b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:01:07 np0005539551 systemd[1]: Started libpod-conmon-b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe.scope.
Nov 29 03:01:07 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:01:07 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e117b22ca5899b47501e4e008b7304d7a7fa4a11e067a75fc0c1f9dcde5511/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:07 np0005539551 podman[244224]: 2025-11-29 08:01:07.169141156 +0000 UTC m=+0.399974788 container init b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:01:07 np0005539551 podman[244224]: 2025-11-29 08:01:07.174923354 +0000 UTC m=+0.405756966 container start b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:07 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [NOTICE]   (244244) : New worker (244246) forked
Nov 29 03:01:07 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [NOTICE]   (244244) : Loading success.
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.141 227364 DEBUG nova.compute.manager [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.143 227364 DEBUG oslo_concurrency.lockutils [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.143 227364 DEBUG oslo_concurrency.lockutils [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.144 227364 DEBUG oslo_concurrency.lockutils [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.145 227364 DEBUG nova.compute.manager [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] No waiting events found dispatching network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.146 227364 WARNING nova.compute.manager [req-d59fef94-cc82-4b7f-a0fc-b1e68f3bd1dd req-77fcc035-22ff-4361-8c4b-5a6f325aa087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received unexpected event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:08 np0005539551 nova_compute[227360]: 2025-11-29 08:01:08.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.504 227364 DEBUG nova.compute.manager [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.505 227364 DEBUG oslo_concurrency.lockutils [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.505 227364 DEBUG oslo_concurrency.lockutils [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.505 227364 DEBUG oslo_concurrency.lockutils [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.506 227364 DEBUG nova.compute.manager [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] No waiting events found dispatching network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:09 np0005539551 nova_compute[227360]: 2025-11-29 08:01:09.506 227364 WARNING nova.compute.manager [req-bcb5649e-2db6-481b-a0f0-b3951efff7a9 req-87e46ea1-556d-4966-b57f-3fc5fa922a53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received unexpected event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:10.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:10 np0005539551 nova_compute[227360]: 2025-11-29 08:01:10.705 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539551 NetworkManager[48922]: <info>  [1764403270.7062] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 03:01:10 np0005539551 NetworkManager[48922]: <info>  [1764403270.7067] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 03:01:10 np0005539551 nova_compute[227360]: 2025-11-29 08:01:10.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:10Z|00150|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:01:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:10Z|00151|binding|INFO|Releasing lport a70c8006-642e-4a6b-bad1-4f848cd680db from this chassis (sb_readonly=0)
Nov 29 03:01:10 np0005539551 nova_compute[227360]: 2025-11-29 08:01:10.853 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.524 227364 DEBUG nova.compute.manager [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-changed-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.525 227364 DEBUG nova.compute.manager [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing instance network info cache due to event network-changed-26202237-bc25-465c-bcab-caf462b96a73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.525 227364 DEBUG oslo_concurrency.lockutils [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.526 227364 DEBUG oslo_concurrency.lockutils [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:11 np0005539551 nova_compute[227360]: 2025-11-29 08:01:11.526 227364 DEBUG nova.network.neutron [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing network info cache for port 26202237-bc25-465c-bcab-caf462b96a73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:13 np0005539551 nova_compute[227360]: 2025-11-29 08:01:13.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.310 227364 DEBUG nova.network.neutron [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updated VIF entry in instance network info cache for port 26202237-bc25-465c-bcab-caf462b96a73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.310 227364 DEBUG nova.network.neutron [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.339 227364 DEBUG oslo_concurrency.lockutils [req-a14ee803-40a1-4711-a507-eb25cc6820e2 req-7e082d9f-99f5-43c9-8c3d-453deb2b6890 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:14 np0005539551 nova_compute[227360]: 2025-11-29 08:01:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:15 np0005539551 nova_compute[227360]: 2025-11-29 08:01:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:15 np0005539551 nova_compute[227360]: 2025-11-29 08:01:15.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:01:15 np0005539551 nova_compute[227360]: 2025-11-29 08:01:15.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:01:16 np0005539551 nova_compute[227360]: 2025-11-29 08:01:16.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:16 np0005539551 nova_compute[227360]: 2025-11-29 08:01:16.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:16 np0005539551 nova_compute[227360]: 2025-11-29 08:01:16.440 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:01:16 np0005539551 nova_compute[227360]: 2025-11-29 08:01:16.441 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:17 np0005539551 nova_compute[227360]: 2025-11-29 08:01:17.998 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.015 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.016 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.016 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.017 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.050 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.051 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.051 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.051 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.051 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1526988358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.586 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:18Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:38:65 10.100.0.13
Nov 29 03:01:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:38:65 10.100.0.13
Nov 29 03:01:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.751 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.752 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.754 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.754 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.907 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.908 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4452MB free_disk=20.900978088378906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.908 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539551 nova_compute[227360]: 2025-11-29 08:01:18.908 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.004 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b8808203-6263-4e9c-bff5-7d273d143a50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.004 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.004 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.005 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.068 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/505287399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.409 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3078115532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.508 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.513 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.536 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.560 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.560 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:19Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:0f:ee 10.100.0.12
Nov 29 03:01:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:19Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:0f:ee 10.100.0.12
Nov 29 03:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:19.852 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:19.853 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:19.853 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.966 227364 DEBUG nova.compute.manager [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.967 227364 DEBUG nova.compute.manager [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing instance network info cache due to event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.967 227364 DEBUG oslo_concurrency.lockutils [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.967 227364 DEBUG oslo_concurrency.lockutils [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:19 np0005539551 nova_compute[227360]: 2025-11-29 08:01:19.967 227364 DEBUG nova.network.neutron [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing network info cache for port 3d89eff8-f34b-494c-8c30-7457cbc1852c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:20 np0005539551 nova_compute[227360]: 2025-11-29 08:01:20.953 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:20 np0005539551 nova_compute[227360]: 2025-11-29 08:01:20.953 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539551 nova_compute[227360]: 2025-11-29 08:01:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:22 np0005539551 nova_compute[227360]: 2025-11-29 08:01:22.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:01:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539551 nova_compute[227360]: 2025-11-29 08:01:22.778 227364 DEBUG nova.network.neutron [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updated VIF entry in instance network info cache for port 3d89eff8-f34b-494c-8c30-7457cbc1852c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:22 np0005539551 nova_compute[227360]: 2025-11-29 08:01:22.779 227364 DEBUG nova.network.neutron [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updating instance_info_cache with network_info: [{"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:22 np0005539551 nova_compute[227360]: 2025-11-29 08:01:22.798 227364 DEBUG oslo_concurrency.lockutils [req-d7f9198a-fec5-429d-b52b-c5d6ea6172ba req-0718db54-573b-40b4-b817-1055e20a43aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:23 np0005539551 nova_compute[227360]: 2025-11-29 08:01:23.235 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:23 np0005539551 podman[244304]: 2025-11-29 08:01:23.607355039 +0000 UTC m=+0.055521102 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:01:23 np0005539551 podman[244303]: 2025-11-29 08:01:23.656239028 +0000 UTC m=+0.099998110 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:01:23 np0005539551 podman[244302]: 2025-11-29 08:01:23.690075204 +0000 UTC m=+0.143352008 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:01:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:24 np0005539551 nova_compute[227360]: 2025-11-29 08:01:24.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.304 227364 DEBUG nova.compute.manager [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.305 227364 DEBUG nova.compute.manager [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing instance network info cache due to event network-changed-3d89eff8-f34b-494c-8c30-7457cbc1852c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.305 227364 DEBUG oslo_concurrency.lockutils [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.305 227364 DEBUG oslo_concurrency.lockutils [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:28 np0005539551 nova_compute[227360]: 2025-11-29 08:01:28.305 227364 DEBUG nova.network.neutron [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Refreshing network info cache for port 3d89eff8-f34b-494c-8c30-7457cbc1852c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:29 np0005539551 nova_compute[227360]: 2025-11-29 08:01:29.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.129 227364 DEBUG nova.network.neutron [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updated VIF entry in instance network info cache for port 3d89eff8-f34b-494c-8c30-7457cbc1852c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.129 227364 DEBUG nova.network.neutron [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updating instance_info_cache with network_info: [{"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.152 227364 DEBUG oslo_concurrency.lockutils [req-da3e7856-fe99-494e-8977-9e43551565e5 req-1a8bc603-ad36-4266-bfec-dac2fc589d06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.342 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.343 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.344 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.344 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.344 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.346 227364 INFO nova.compute.manager [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Terminating instance#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.348 227364 DEBUG nova.compute.manager [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:30 np0005539551 kernel: tap3d89eff8-f3 (unregistering): left promiscuous mode
Nov 29 03:01:30 np0005539551 NetworkManager[48922]: <info>  [1764403290.4081] device (tap3d89eff8-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.416 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:30Z|00152|binding|INFO|Releasing lport 3d89eff8-f34b-494c-8c30-7457cbc1852c from this chassis (sb_readonly=0)
Nov 29 03:01:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:30Z|00153|binding|INFO|Setting lport 3d89eff8-f34b-494c-8c30-7457cbc1852c down in Southbound
Nov 29 03:01:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:30Z|00154|binding|INFO|Removing iface tap3d89eff8-f3 ovn-installed in OVS
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 29 03:01:30 np0005539551 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000024.scope: Consumed 13.762s CPU time.
Nov 29 03:01:30 np0005539551 systemd-machined[190756]: Machine qemu-19-instance-00000024 terminated.
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.582 227364 INFO nova.virt.libvirt.driver [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Instance destroyed successfully.#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.583 227364 DEBUG nova.objects.instance [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'resources' on Instance uuid 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.727 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:38:65 10.100.0.13'], port_security=['fa:16:3e:3a:38:65 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9d6124aa-da05-4d06-86b2-1d2bb6c9e89e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1ea0603-2fd5-45b7-93c4-f54e4ea109a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2df18a4d-96ab-4a0e-9a1e-536ced01a074, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3d89eff8-f34b-494c-8c30-7457cbc1852c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.729 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3d89eff8-f34b-494c-8c30-7457cbc1852c in datapath 4537c1f2-a597-4d7c-a9c3-2ad7483510a6 unbound from our chassis#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.730 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4537c1f2-a597-4d7c-a9c3-2ad7483510a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.731 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dc818109-a995-4caf-a3d2-b2fcd6aa62cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.732 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 namespace which is not needed anymore#033[00m
Nov 29 03:01:30 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [NOTICE]   (244169) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:30 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [NOTICE]   (244169) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:30 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [WARNING]  (244169) : Exiting Master process...
Nov 29 03:01:30 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [ALERT]    (244169) : Current worker (244171) exited with code 143 (Terminated)
Nov 29 03:01:30 np0005539551 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[244165]: [WARNING]  (244169) : All workers exited. Exiting... (0)
Nov 29 03:01:30 np0005539551 systemd[1]: libpod-c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4.scope: Deactivated successfully.
Nov 29 03:01:30 np0005539551 podman[244400]: 2025-11-29 08:01:30.859789554 +0000 UTC m=+0.043763740 container died c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:01:30 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:30 np0005539551 systemd[1]: var-lib-containers-storage-overlay-770d707f1b25abf27920a21e87939288e58170512d74baa16d5c96c437b13f67-merged.mount: Deactivated successfully.
Nov 29 03:01:30 np0005539551 podman[244400]: 2025-11-29 08:01:30.896791717 +0000 UTC m=+0.080765913 container cleanup c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:01:30 np0005539551 systemd[1]: libpod-conmon-c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4.scope: Deactivated successfully.
Nov 29 03:01:30 np0005539551 podman[244431]: 2025-11-29 08:01:30.949998195 +0000 UTC m=+0.034611079 container remove c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.956 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[083032ec-576c-4f49-b54f-4bfced0e5b7a]: (4, ('Sat Nov 29 08:01:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 (c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4)\nc0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4\nSat Nov 29 08:01:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 (c0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4)\nc0ac4c3c321b761bab145f8f5c43593deac02161a795355f480487d671040da4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.957 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e7b10e-a79a-4308-b5b7-af8826053eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.958 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4537c1f2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539551 kernel: tap4537c1f2-a0: left promiscuous mode
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 nova_compute[227360]: 2025-11-29 08:01:30.977 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.980 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e16f7f39-59ff-4aab-a783-0e7a43104a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.993 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[30745c48-c186-4da0-a8cc-ef648ddca3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:30.994 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[77e4b1df-507d-497e-8926-e7d29f0f0292]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:31.008 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e9a9f9-caca-42e7-8be6-c61342c10882]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631189, 'reachable_time': 33823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244450, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:31.010 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:31.011 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[eecef918-f576-49e3-abe2-e0443fb8e17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:31 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4537c1f2\x2da597\x2d4d7c\x2da9c3\x2d2ad7483510a6.mount: Deactivated successfully.
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.792 227364 DEBUG nova.virt.libvirt.vif [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-194902022',display_name='tempest-FloatingIPsAssociationTestJSON-server-194902022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-194902022',id=36,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-a5c1w0pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:05Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=9d6124aa-da05-4d06-86b2-1d2bb6c9e89e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.792 227364 DEBUG nova.network.os_vif_util [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "address": "fa:16:3e:3a:38:65", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d89eff8-f3", "ovs_interfaceid": "3d89eff8-f34b-494c-8c30-7457cbc1852c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.793 227364 DEBUG nova.network.os_vif_util [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.793 227364 DEBUG os_vif [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.795 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d89eff8-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:31 np0005539551 nova_compute[227360]: 2025-11-29 08:01:31.800 227364 INFO os_vif [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:38:65,bridge_name='br-int',has_traffic_filtering=True,id=3d89eff8-f34b-494c-8c30-7457cbc1852c,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d89eff8-f3')#033[00m
Nov 29 03:01:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:32.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.887 227364 DEBUG nova.compute.manager [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-unplugged-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.887 227364 DEBUG oslo_concurrency.lockutils [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.888 227364 DEBUG oslo_concurrency.lockutils [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.888 227364 DEBUG oslo_concurrency.lockutils [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.888 227364 DEBUG nova.compute.manager [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] No waiting events found dispatching network-vif-unplugged-3d89eff8-f34b-494c-8c30-7457cbc1852c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:32 np0005539551 nova_compute[227360]: 2025-11-29 08:01:32.888 227364 DEBUG nova.compute.manager [req-acbaf37a-f2a1-4b13-a23f-6ed06c7347cd req-c9cb7d37-9af4-4182-85bc-374bc08cc0e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-unplugged-3d89eff8-f34b-494c-8c30-7457cbc1852c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.700 227364 INFO nova.virt.libvirt.driver [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Deleting instance files /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_del#033[00m
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.701 227364 INFO nova.virt.libvirt.driver [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Deletion of /var/lib/nova/instances/9d6124aa-da05-4d06-86b2-1d2bb6c9e89e_del complete#033[00m
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.754 227364 INFO nova.compute.manager [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Took 4.41 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.754 227364 DEBUG oslo.service.loopingcall [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.755 227364 DEBUG nova.compute.manager [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:34 np0005539551 nova_compute[227360]: 2025-11-29 08:01:34.755 227364 DEBUG nova.network.neutron [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.007 227364 DEBUG nova.compute.manager [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.008 227364 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.008 227364 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.009 227364 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.009 227364 DEBUG nova.compute.manager [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] No waiting events found dispatching network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.010 227364 WARNING nova.compute.manager [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received unexpected event network-vif-plugged-3d89eff8-f34b-494c-8c30-7457cbc1852c for instance with vm_state active and task_state deleting.
Nov 29 03:01:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:35.301 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.302 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:35.303 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.969 227364 DEBUG nova.network.neutron [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:35 np0005539551 nova_compute[227360]: 2025-11-29 08:01:35.993 227364 INFO nova.compute.manager [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Took 1.24 seconds to deallocate network for instance.
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.055 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.056 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.075 227364 DEBUG nova.compute.manager [req-67b08a86-a33c-44d8-accf-ae8bdb917ebe req-5a319cee-1983-4fb6-afb4-501f56e34779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Received event network-vif-deleted-3d89eff8-f34b-494c-8c30-7457cbc1852c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.145 227364 DEBUG oslo_concurrency.processutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1133920864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.731 227364 DEBUG oslo_concurrency.processutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.736 227364 DEBUG nova.compute.provider_tree [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:01:36 np0005539551 nova_compute[227360]: 2025-11-29 08:01:36.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:37.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:37 np0005539551 nova_compute[227360]: 2025-11-29 08:01:37.524 227364 DEBUG nova.scheduler.client.report [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:01:37 np0005539551 nova_compute[227360]: 2025-11-29 08:01:37.560 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:37 np0005539551 nova_compute[227360]: 2025-11-29 08:01:37.691 227364 INFO nova.scheduler.client.report [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Deleted allocations for instance 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e
Nov 29 03:01:37 np0005539551 nova_compute[227360]: 2025-11-29 08:01:37.776 227364 DEBUG oslo_concurrency.lockutils [None req-fb2ec457-3d1a-4ed4-bbce-3cc208be3669 b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "9d6124aa-da05-4d06-86b2-1d2bb6c9e89e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:38.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:39 np0005539551 nova_compute[227360]: 2025-11-29 08:01:39.472 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:40.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:41.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:01:41.305 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.451 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "c41ceac4-1144-4898-b728-dd95a4a30a49" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.452 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.468 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:01:41 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.537 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.538 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.544 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.544 227364 INFO nova.compute.claims [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.643 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:41 np0005539551 nova_compute[227360]: 2025-11-29 08:01:41.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328859484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.073 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.078 227364 DEBUG nova.compute.provider_tree [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.095 227364 DEBUG nova.scheduler.client.report [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.130 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.132 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.209 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.248 227364 INFO nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.264 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.394 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.396 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.396 227364 INFO nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Creating image(s)
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.418 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.441 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.469 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.473 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.547 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.548 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.549 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.549 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.571 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:42 np0005539551 nova_compute[227360]: 2025-11-29 08:01:42.574 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c41ceac4-1144-4898-b728-dd95a4a30a49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:42.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.070 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c41ceac4-1144-4898-b728-dd95a4a30a49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.146 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] resizing rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.258 227364 DEBUG nova.objects.instance [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lazy-loading 'migration_context' on Instance uuid c41ceac4-1144-4898-b728-dd95a4a30a49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.276 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.277 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Ensure instance console log exists: /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.277 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.278 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.278 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.279 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.283 227364 WARNING nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.288 227364 DEBUG nova.virt.libvirt.host [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.288 227364 DEBUG nova.virt.libvirt.host [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.292 227364 DEBUG nova.virt.libvirt.host [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.293 227364 DEBUG nova.virt.libvirt.host [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.294 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.294 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.294 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.294 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.295 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.296 227364 DEBUG nova.virt.hardware [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.298 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/752120317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.790 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.814 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:43 np0005539551 nova_compute[227360]: 2025-11-29 08:01:43.817 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/195208938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.280 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.282 227364 DEBUG nova.objects.instance [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid c41ceac4-1144-4898-b728-dd95a4a30a49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.299 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <uuid>c41ceac4-1144-4898-b728-dd95a4a30a49</uuid>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <name>instance-00000027</name>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-474159051</nova:name>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:01:43</nova:creationTime>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:user uuid="81c93a09c1b0475aac91299a1bae36c9">tempest-ServerDiagnosticsV248Test-549344510-project-member</nova:user>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <nova:project uuid="132abeb6a490432292d29a40ab72f4b0">tempest-ServerDiagnosticsV248Test-549344510</nova:project>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="serial">c41ceac4-1144-4898-b728-dd95a4a30a49</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="uuid">c41ceac4-1144-4898-b728-dd95a4a30a49</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c41ceac4-1144-4898-b728-dd95a4a30a49_disk">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/console.log" append="off"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:01:44 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:01:44 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:01:44 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:01:44 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.348 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.349 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.349 227364 INFO nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Using config drive
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.376 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.512 227364 INFO nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Creating config drive at /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.518 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7hpay8t4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.537 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.645 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7hpay8t4" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.673 227364 DEBUG nova.storage.rbd_utils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] rbd image c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.677 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.880 227364 DEBUG oslo_concurrency.processutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config c41ceac4-1144-4898-b728-dd95a4a30a49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:44 np0005539551 nova_compute[227360]: 2025-11-29 08:01:44.881 227364 INFO nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Deleting local config drive /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49/disk.config because it was imported into RBD.
Nov 29 03:01:44 np0005539551 systemd-machined[190756]: New machine qemu-21-instance-00000027.
Nov 29 03:01:44 np0005539551 systemd[1]: Started Virtual Machine qemu-21-instance-00000027.
Nov 29 03:01:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.580 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403290.5793955, 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.582 227364 INFO nova.compute.manager [-] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] VM Stopped (Lifecycle Event)
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.602 227364 DEBUG nova.compute.manager [None req-8360f6e5-d3dd-4520-9ab8-3ce944cdb67e - - - - - -] [instance: 9d6124aa-da05-4d06-86b2-1d2bb6c9e89e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.680 227364 DEBUG nova.compute.manager [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-changed-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.681 227364 DEBUG nova.compute.manager [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing instance network info cache due to event network-changed-26202237-bc25-465c-bcab-caf462b96a73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.681 227364 DEBUG oslo_concurrency.lockutils [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.682 227364 DEBUG oslo_concurrency.lockutils [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.682 227364 DEBUG nova.network.neutron [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Refreshing network info cache for port 26202237-bc25-465c-bcab-caf462b96a73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.728 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403305.7283864, c41ceac4-1144-4898-b728-dd95a4a30a49 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.729 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] VM Resumed (Lifecycle Event)
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.731 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.731 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.734 227364 INFO nova.virt.libvirt.driver [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Instance spawned successfully.
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.734 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.765 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.767 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.774 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.774 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.775 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.775 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.775 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.776 227364 DEBUG nova.virt.libvirt.driver [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.823 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.823 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403305.7301576, c41ceac4-1144-4898-b728-dd95a4a30a49 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:45 np0005539551 nova_compute[227360]: 2025-11-29 08:01:45.823 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.025 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.030 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.056 227364 INFO nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Took 3.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.056 227364 DEBUG nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.061 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.137 227364 INFO nova.compute.manager [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Took 4.61 seconds to build instance.#033[00m
Nov 29 03:01:46 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.163 227364 DEBUG oslo_concurrency.lockutils [None req-30f85787-774d-46c8-ad08-d8b365292693 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:46.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.787 227364 DEBUG nova.network.neutron [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updated VIF entry in instance network info cache for port 26202237-bc25-465c-bcab-caf462b96a73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:46.788 227364 DEBUG nova.network.neutron [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [{"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:47.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:47.307 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:47.587 227364 DEBUG oslo_concurrency.lockutils [req-cd13bc0d-79bf-40ae-a34b-486f614c9718 req-89bd19ef-8072-4366-ba6e-23da383056fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8808203-6263-4e9c-bff5-7d273d143a50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:47.766 227364 DEBUG nova.compute.manager [None req-9bd6aebf-d2fb-40ce-991e-cb7a2f6aa70c a0ee8fbcfa2b43a296188a59d33ca391 df358e17a9d545dcbdf27798cc165c96 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:47 np0005539551 nova_compute[227360]: 2025-11-29 08:01:47.769 227364 INFO nova.compute.manager [None req-9bd6aebf-d2fb-40ce-991e-cb7a2f6aa70c a0ee8fbcfa2b43a296188a59d33ca391 df358e17a9d545dcbdf27798cc165c96 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Retrieving diagnostics#033[00m
Nov 29 03:01:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:49.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:49 np0005539551 nova_compute[227360]: 2025-11-29 08:01:49.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:01:50Z|00155|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:01:50 np0005539551 nova_compute[227360]: 2025-11-29 08:01:50.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:50.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:52 np0005539551 nova_compute[227360]: 2025-11-29 08:01:52.344 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539551 nova_compute[227360]: 2025-11-29 08:01:54.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:54 np0005539551 podman[244860]: 2025-11-29 08:01:54.651623102 +0000 UTC m=+0.051691768 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:54 np0005539551 podman[244859]: 2025-11-29 08:01:54.661738578 +0000 UTC m=+0.065485474 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:54 np0005539551 podman[244858]: 2025-11-29 08:01:54.684580614 +0000 UTC m=+0.088891855 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:01:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:54.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:57 np0005539551 nova_compute[227360]: 2025-11-29 08:01:57.348 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:57 np0005539551 nova_compute[227360]: 2025-11-29 08:01:57.943 227364 DEBUG nova.compute.manager [None req-8912f06a-53f0-46b1-ba75-805c62bb4420 a0ee8fbcfa2b43a296188a59d33ca391 df358e17a9d545dcbdf27798cc165c96 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:57 np0005539551 nova_compute[227360]: 2025-11-29 08:01:57.945 227364 INFO nova.compute.manager [None req-8912f06a-53f0-46b1-ba75-805c62bb4420 a0ee8fbcfa2b43a296188a59d33ca391 df358e17a9d545dcbdf27798cc165c96 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Retrieving diagnostics#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.237 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "c41ceac4-1144-4898-b728-dd95a4a30a49" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.237 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.238 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "c41ceac4-1144-4898-b728-dd95a4a30a49-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.238 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.238 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.239 227364 INFO nova.compute.manager [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Terminating instance#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.240 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "refresh_cache-c41ceac4-1144-4898-b728-dd95a4a30a49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.240 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquired lock "refresh_cache-c41ceac4-1144-4898-b728-dd95a4a30a49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.240 227364 DEBUG nova.network.neutron [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:58 np0005539551 nova_compute[227360]: 2025-11-29 08:01:58.524 227364 DEBUG nova.network.neutron [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:58.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:01:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.187 227364 DEBUG nova.network.neutron [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.217 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Releasing lock "refresh_cache-c41ceac4-1144-4898-b728-dd95a4a30a49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.218 227364 DEBUG nova.compute.manager [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:59 np0005539551 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 29 03:01:59 np0005539551 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Consumed 13.822s CPU time.
Nov 29 03:01:59 np0005539551 systemd-machined[190756]: Machine qemu-21-instance-00000027 terminated.
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.567 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.642 227364 INFO nova.virt.libvirt.driver [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Instance destroyed successfully.#033[00m
Nov 29 03:01:59 np0005539551 nova_compute[227360]: 2025-11-29 08:01:59.642 227364 DEBUG nova.objects.instance [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lazy-loading 'resources' on Instance uuid c41ceac4-1144-4898-b728-dd95a4a30a49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:02:00Z|00156|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:02:00 np0005539551 nova_compute[227360]: 2025-11-29 08:02:00.607 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:00.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.178 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.409 227364 INFO nova.virt.libvirt.driver [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Deleting instance files /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49_del#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.410 227364 INFO nova.virt.libvirt.driver [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Deletion of /var/lib/nova/instances/c41ceac4-1144-4898-b728-dd95a4a30a49_del complete#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.469 227364 INFO nova.compute.manager [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Took 2.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.469 227364 DEBUG oslo.service.loopingcall [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.470 227364 DEBUG nova.compute.manager [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.470 227364 DEBUG nova.network.neutron [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.639 227364 DEBUG nova.network.neutron [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.654 227364 DEBUG nova.network.neutron [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.670 227364 INFO nova.compute.manager [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Took 0.20 seconds to deallocate network for instance.#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.715 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.715 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:01 np0005539551 nova_compute[227360]: 2025-11-29 08:02:01.797 227364 DEBUG oslo_concurrency.processutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2519849738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.239 227364 DEBUG oslo_concurrency.processutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.245 227364 DEBUG nova.compute.provider_tree [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.260 227364 DEBUG nova.scheduler.client.report [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.284 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.309 227364 INFO nova.scheduler.client.report [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Deleted allocations for instance c41ceac4-1144-4898-b728-dd95a4a30a49#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.349 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:02 np0005539551 nova_compute[227360]: 2025-11-29 08:02:02.368 227364 DEBUG oslo_concurrency.lockutils [None req-f84a156e-377e-4b3a-813e-688dbab8bbf6 81c93a09c1b0475aac91299a1bae36c9 132abeb6a490432292d29a40ab72f4b0 - - default default] Lock "c41ceac4-1144-4898-b728-dd95a4a30a49" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:04 np0005539551 nova_compute[227360]: 2025-11-29 08:02:04.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:04 np0005539551 nova_compute[227360]: 2025-11-29 08:02:04.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:04.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000082s ======
Nov 29 03:02:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:06.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Nov 29 03:02:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:07.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:07 np0005539551 nova_compute[227360]: 2025-11-29 08:02:07.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:09.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:09 np0005539551 nova_compute[227360]: 2025-11-29 08:02:09.571 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:12 np0005539551 nova_compute[227360]: 2025-11-29 08:02:12.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539551 nova_compute[227360]: 2025-11-29 08:02:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:12.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.573 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.640 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403319.64006, c41ceac4-1144-4898-b728-dd95a4a30a49 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.641 227364 INFO nova.compute.manager [-] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:02:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:14 np0005539551 nova_compute[227360]: 2025-11-29 08:02:14.840 227364 DEBUG nova.compute.manager [None req-24af4bc4-12a8-4d2c-9584-32b2273d6ec7 - - - - - -] [instance: c41ceac4-1144-4898-b728-dd95a4a30a49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:15.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539551 nova_compute[227360]: 2025-11-29 08:02:16.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:16 np0005539551 nova_compute[227360]: 2025-11-29 08:02:16.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:02:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:17 np0005539551 nova_compute[227360]: 2025-11-29 08:02:17.359 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:17 np0005539551 nova_compute[227360]: 2025-11-29 08:02:17.829 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:02:17 np0005539551 nova_compute[227360]: 2025-11-29 08:02:17.830 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.489 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.490 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.491 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.491 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.492 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.640 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.641 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.676 227364 DEBUG nova.objects.instance [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:18.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.746 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/906394754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:18 np0005539551 nova_compute[227360]: 2025-11-29 08:02:18.921 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.139 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.140 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.297 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.298 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4650MB free_disk=20.897136688232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.298 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.298 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:19 np0005539551 nova_compute[227360]: 2025-11-29 08:02:19.575 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:19.854 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:19.854 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:19.855 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.362 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.854 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.855 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.855 227364 INFO nova.compute.manager [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Attaching volume 87f35256-d25d-497d-b09f-e847b4302174 to /dev/vdb#033[00m
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.921 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b8808203-6263-4e9c-bff5-7d273d143a50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.921 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:02:22 np0005539551 nova_compute[227360]: 2025-11-29 08:02:22.921 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.005 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:23.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.401 227364 DEBUG os_brick.utils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.404 227364 INFO oslo.privsep.daemon [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpfd2ftk08/privsep.sock']#033[00m
Nov 29 03:02:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/658084378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.448 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.458 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.562 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.649 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:02:23 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.649 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.059 227364 INFO oslo.privsep.daemon [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.945 245195 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.948 245195 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.950 245195 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:23.950 245195 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245195#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.062 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[98510284-194f-4758-b854-86a64bc2bd65]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.164 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.175 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.175 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[296adc84-8919-4e69-b353-07fe44988206]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.176 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.183 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.183 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[f1eeb071-6d72-4591-ba24-a600dfd6c39a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.185 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.195 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.195 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[06f2b7c1-5b44-4dc5-a857-c40b90b20b5d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.198 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[8070ec32-43b1-485e-bb9b-7e744cf191e0]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.198 227364 DEBUG oslo_concurrency.processutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.237 227364 DEBUG oslo_concurrency.processutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.240 227364 DEBUG os_brick.initiator.connectors.lightos [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.242 227364 DEBUG os_brick.initiator.connectors.lightos [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.242 227364 DEBUG os_brick.initiator.connectors.lightos [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.242 227364 DEBUG os_brick.utils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] <== get_connector_properties: return (840ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.243 227364 DEBUG nova.virt.block_device [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating existing volume attachment record: 62398f19-0495-4be0-ad68-f9072905252a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:02:24 np0005539551 nova_compute[227360]: 2025-11-29 08:02:24.577 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:25.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:25 np0005539551 podman[245205]: 2025-11-29 08:02:25.648054326 +0000 UTC m=+0.081594696 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:02:25 np0005539551 podman[245206]: 2025-11-29 08:02:25.648254732 +0000 UTC m=+0.078359278 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:02:25 np0005539551 podman[245204]: 2025-11-29 08:02:25.716100541 +0000 UTC m=+0.161935147 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:02:26 np0005539551 nova_compute[227360]: 2025-11-29 08:02:26.651 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:26 np0005539551 nova_compute[227360]: 2025-11-29 08:02:26.870 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:26 np0005539551 nova_compute[227360]: 2025-11-29 08:02:26.870 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:26 np0005539551 nova_compute[227360]: 2025-11-29 08:02:26.870 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:02:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:27.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:27 np0005539551 nova_compute[227360]: 2025-11-29 08:02:27.364 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.693 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.693 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.695 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.703 227364 DEBUG nova.objects.instance [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.740 227364 DEBUG nova.virt.libvirt.driver [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Attempting to attach volume 87f35256-d25d-497d-b09f-e847b4302174 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:02:28 np0005539551 nova_compute[227360]: 2025-11-29 08:02:28.743 227364 DEBUG nova.virt.libvirt.guest [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:28 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:02:28 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:28 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:02:28 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:02:28 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:02:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:29 np0005539551 nova_compute[227360]: 2025-11-29 08:02:29.580 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:30 np0005539551 nova_compute[227360]: 2025-11-29 08:02:30.640 227364 DEBUG nova.virt.libvirt.driver [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:30 np0005539551 nova_compute[227360]: 2025-11-29 08:02:30.640 227364 DEBUG nova.virt.libvirt.driver [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:30 np0005539551 nova_compute[227360]: 2025-11-29 08:02:30.641 227364 DEBUG nova.virt.libvirt.driver [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:30 np0005539551 nova_compute[227360]: 2025-11-29 08:02:30.641 227364 DEBUG nova.virt.libvirt.driver [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No VIF found with MAC fa:16:3e:ef:0f:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:02:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:30.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:31 np0005539551 nova_compute[227360]: 2025-11-29 08:02:31.032 227364 DEBUG oslo_concurrency.lockutils [None req-38c39826-2fd1-4e2a-9d5f-a021f7a29f3b ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 8.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:32 np0005539551 nova_compute[227360]: 2025-11-29 08:02:32.367 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:32.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:33.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539551 nova_compute[227360]: 2025-11-29 08:02:34.582 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:34.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:36.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:37 np0005539551 nova_compute[227360]: 2025-11-29 08:02:37.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:37 np0005539551 nova_compute[227360]: 2025-11-29 08:02:37.971 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:38.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:39.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:39 np0005539551 nova_compute[227360]: 2025-11-29 08:02:39.584 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:42 np0005539551 nova_compute[227360]: 2025-11-29 08:02:42.396 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:42.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:43.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:44 np0005539551 nova_compute[227360]: 2025-11-29 08:02:44.587 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.266 227364 DEBUG oslo_concurrency.lockutils [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.266 227364 DEBUG oslo_concurrency.lockutils [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.291 227364 INFO nova.compute.manager [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Detaching volume 87f35256-d25d-497d-b09f-e847b4302174#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.553 227364 INFO nova.virt.block_device [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Attempting to driver detach volume 87f35256-d25d-497d-b09f-e847b4302174 from mountpoint /dev/vdb#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.561 227364 DEBUG nova.virt.libvirt.driver [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Attempting to detach device vdb from instance b8808203-6263-4e9c-bff5-7d273d143a50 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.562 227364 DEBUG nova.virt.libvirt.guest [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:02:45 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.571 227364 INFO nova.virt.libvirt.driver [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully detached device vdb from instance b8808203-6263-4e9c-bff5-7d273d143a50 from the persistent domain config.#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.572 227364 DEBUG nova.virt.libvirt.driver [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b8808203-6263-4e9c-bff5-7d273d143a50 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.572 227364 DEBUG nova.virt.libvirt.guest [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:02:45 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:02:45 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.889 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764403365.8887653, b8808203-6263-4e9c-bff5-7d273d143a50 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.890 227364 DEBUG nova.virt.libvirt.driver [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b8808203-6263-4e9c-bff5-7d273d143a50 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:02:45 np0005539551 nova_compute[227360]: 2025-11-29 08:02:45.892 227364 INFO nova.virt.libvirt.driver [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully detached device vdb from instance b8808203-6263-4e9c-bff5-7d273d143a50 from the live domain config.#033[00m
Nov 29 03:02:46 np0005539551 nova_compute[227360]: 2025-11-29 08:02:46.622 227364 DEBUG nova.objects.instance [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:46 np0005539551 nova_compute[227360]: 2025-11-29 08:02:46.710 227364 DEBUG oslo_concurrency.lockutils [None req-246ba392-355f-4608-9294-3a3fa8bbf652 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:46.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:02:46Z|00157|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:02:46 np0005539551 nova_compute[227360]: 2025-11-29 08:02:46.914 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:02:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:02:47 np0005539551 nova_compute[227360]: 2025-11-29 08:02:47.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:49 np0005539551 nova_compute[227360]: 2025-11-29 08:02:49.588 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:51 np0005539551 nova_compute[227360]: 2025-11-29 08:02:51.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:51.486 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:51.488 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:52 np0005539551 nova_compute[227360]: 2025-11-29 08:02:52.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:52.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:53.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:54 np0005539551 nova_compute[227360]: 2025-11-29 08:02:54.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.883658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374883699, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2446, "num_deletes": 254, "total_data_size": 5784617, "memory_usage": 5856784, "flush_reason": "Manual Compaction"}
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374987238, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3793794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29606, "largest_seqno": 32047, "table_properties": {"data_size": 3783890, "index_size": 6270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21221, "raw_average_key_size": 20, "raw_value_size": 3763811, "raw_average_value_size": 3679, "num_data_blocks": 273, "num_entries": 1023, "num_filter_entries": 1023, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403157, "oldest_key_time": 1764403157, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 103696 microseconds, and 7493 cpu microseconds.
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.987347) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3793794 bytes OK
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.987372) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.989150) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.989171) EVENT_LOG_v1 {"time_micros": 1764403374989165, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.989189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5773733, prev total WAL file size 5773733, number of live WAL files 2.
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.990807) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3704KB)], [60(9063KB)]
Nov 29 03:02:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374990909, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13074702, "oldest_snapshot_seqno": -1}
Nov 29 03:02:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:55.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6286 keys, 11162634 bytes, temperature: kUnknown
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403375257101, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11162634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119858, "index_size": 25962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 161402, "raw_average_key_size": 25, "raw_value_size": 11005974, "raw_average_value_size": 1750, "num_data_blocks": 1043, "num_entries": 6286, "num_filter_entries": 6286, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.257518) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11162634 bytes
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.259453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.1 rd, 41.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6810, records dropped: 524 output_compression: NoCompression
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.259482) EVENT_LOG_v1 {"time_micros": 1764403375259468, "job": 36, "event": "compaction_finished", "compaction_time_micros": 266290, "compaction_time_cpu_micros": 50596, "output_level": 6, "num_output_files": 1, "total_output_size": 11162634, "num_input_records": 6810, "num_output_records": 6286, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403375260963, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403375264456, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:54.990729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.264553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.264559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.264562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.264564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:02:55.264566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:56 np0005539551 podman[245296]: 2025-11-29 08:02:56.60708594 +0000 UTC m=+0.057112346 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 29 03:02:56 np0005539551 podman[245295]: 2025-11-29 08:02:56.618731219 +0000 UTC m=+0.068096376 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:56 np0005539551 podman[245294]: 2025-11-29 08:02:56.669209672 +0000 UTC m=+0.125302534 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:02:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:57.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.403 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.491 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.525 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.526 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.526 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.526 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.526 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.527 227364 INFO nova.compute.manager [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Terminating instance#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.528 227364 DEBUG nova.compute.manager [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:02:57 np0005539551 kernel: tap26202237-bc (unregistering): left promiscuous mode
Nov 29 03:02:57 np0005539551 NetworkManager[48922]: <info>  [1764403377.7119] device (tap26202237-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:02:57Z|00158|binding|INFO|Releasing lport 26202237-bc25-465c-bcab-caf462b96a73 from this chassis (sb_readonly=0)
Nov 29 03:02:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:02:57Z|00159|binding|INFO|Setting lport 26202237-bc25-465c-bcab-caf462b96a73 down in Southbound
Nov 29 03:02:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:02:57Z|00160|binding|INFO|Removing iface tap26202237-bc ovn-installed in OVS
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.726 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.741 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:0f:ee 10.100.0.12'], port_security=['fa:16:3e:ef:0f:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8808203-6263-4e9c-bff5-7d273d143a50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f73b0808-21fd-43a4-809d-85e512de1cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'afaf65dfeab546ee991af0438784b8a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c6ebd91-3a75-49da-9c37-da4a9dd74667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc8191a-1888-462d-843d-a3e5df7bfc2c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=26202237-bc25-465c-bcab-caf462b96a73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.743 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 26202237-bc25-465c-bcab-caf462b96a73 in datapath f73b0808-21fd-43a4-809d-85e512de1cb7 unbound from our chassis#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.744 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f73b0808-21fd-43a4-809d-85e512de1cb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.745 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.746 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8ca830-2bb0-470c-a7f4-e58ce9f71a7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:57.747 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 namespace which is not needed anymore#033[00m
Nov 29 03:02:57 np0005539551 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 29 03:02:57 np0005539551 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Consumed 18.278s CPU time.
Nov 29 03:02:57 np0005539551 systemd-machined[190756]: Machine qemu-20-instance-00000023 terminated.
Nov 29 03:02:57 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [NOTICE]   (244244) : haproxy version is 2.8.14-c23fe91
Nov 29 03:02:57 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [NOTICE]   (244244) : path to executable is /usr/sbin/haproxy
Nov 29 03:02:57 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [WARNING]  (244244) : Exiting Master process...
Nov 29 03:02:57 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [ALERT]    (244244) : Current worker (244246) exited with code 143 (Terminated)
Nov 29 03:02:57 np0005539551 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[244240]: [WARNING]  (244244) : All workers exited. Exiting... (0)
Nov 29 03:02:57 np0005539551 systemd[1]: libpod-b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe.scope: Deactivated successfully.
Nov 29 03:02:57 np0005539551 podman[245386]: 2025-11-29 08:02:57.932522099 +0000 UTC m=+0.073962047 container died b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.960 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe-userdata-shm.mount: Deactivated successfully.
Nov 29 03:02:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay-48e117b22ca5899b47501e4e008b7304d7a7fa4a11e067a75fc0c1f9dcde5511-merged.mount: Deactivated successfully.
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.976 227364 INFO nova.virt.libvirt.driver [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Instance destroyed successfully.#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.977 227364 DEBUG nova.objects.instance [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'resources' on Instance uuid b8808203-6263-4e9c-bff5-7d273d143a50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:57 np0005539551 podman[245386]: 2025-11-29 08:02:57.982316194 +0000 UTC m=+0.123756132 container cleanup b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.997 227364 DEBUG nova.virt.libvirt.vif [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-2061019622',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-2061019622',id=35,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4zq3z6vy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=b8808203-6263-4e9c-bff5-7d273d143a50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.998 227364 DEBUG nova.network.os_vif_util [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "26202237-bc25-465c-bcab-caf462b96a73", "address": "fa:16:3e:ef:0f:ee", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26202237-bc", "ovs_interfaceid": "26202237-bc25-465c-bcab-caf462b96a73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:57 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.999 227364 DEBUG nova.network.os_vif_util [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:57.999 227364 DEBUG os_vif [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.002 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.002 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26202237-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:58 np0005539551 systemd[1]: libpod-conmon-b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe.scope: Deactivated successfully.
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.005 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.008 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.012 227364 INFO os_vif [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:0f:ee,bridge_name='br-int',has_traffic_filtering=True,id=26202237-bc25-465c-bcab-caf462b96a73,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26202237-bc')#033[00m
Nov 29 03:02:58 np0005539551 podman[245422]: 2025-11-29 08:02:58.060509516 +0000 UTC m=+0.048128320 container remove b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.069 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ed968561-4b85-4656-ac15-bd41576ddfc8]: (4, ('Sat Nov 29 08:02:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 (b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe)\nb97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe\nSat Nov 29 08:02:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 (b97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe)\nb97ec14b31e5a33a666741138cb0694c7a68b2dadcb66a6d3080913d299384fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.070 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[607e4a37-afbe-45d8-9e35-c0f95c6803bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.071 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf73b0808-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:58 np0005539551 kernel: tapf73b0808-20: left promiscuous mode
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.089 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[57accf75-a534-48b3-a38b-db6830f2d2a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.103 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1b8294-9547-4583-968b-c7897d2edcec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.104 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd216646-57ca-4831-9718-e04b0f9b8464]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.120 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3e6e43-1314-49db-a750-76002a8aabab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631305, 'reachable_time': 43449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245455, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 systemd[1]: run-netns-ovnmeta\x2df73b0808\x2d21fd\x2d43a4\x2d809d\x2d85e512de1cb7.mount: Deactivated successfully.
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.123 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:02:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:02:58.123 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1289c0-afab-4c19-b8fc-a11d0870c6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.460 227364 INFO nova.virt.libvirt.driver [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Deleting instance files /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50_del#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.461 227364 INFO nova.virt.libvirt.driver [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Deletion of /var/lib/nova/instances/b8808203-6263-4e9c-bff5-7d273d143a50_del complete#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.631 227364 DEBUG nova.compute.manager [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-unplugged-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.631 227364 DEBUG oslo_concurrency.lockutils [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.632 227364 DEBUG oslo_concurrency.lockutils [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.632 227364 DEBUG oslo_concurrency.lockutils [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.633 227364 DEBUG nova.compute.manager [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] No waiting events found dispatching network-vif-unplugged-26202237-bc25-465c-bcab-caf462b96a73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.633 227364 DEBUG nova.compute.manager [req-2a44b1d6-eb81-49da-9132-e8f77d0ac7d4 req-54606b81-aa3b-43c2-a972-73fcd592ac76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-unplugged-26202237-bc25-465c-bcab-caf462b96a73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.645 227364 INFO nova.compute.manager [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.645 227364 DEBUG oslo.service.loopingcall [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.646 227364 DEBUG nova.compute.manager [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:02:58 np0005539551 nova_compute[227360]: 2025-11-29 08:02:58.646 227364 DEBUG nova.network.neutron [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:02:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:02:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:59 np0005539551 nova_compute[227360]: 2025-11-29 08:02:59.591 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.634 227364 DEBUG nova.network.neutron [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.711 227364 INFO nova.compute.manager [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Took 2.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.778 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:00.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.817 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.817 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:00 np0005539551 nova_compute[227360]: 2025-11-29 08:03:00.908 227364 DEBUG oslo_concurrency.processutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.066 227364 DEBUG nova.compute.manager [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.066 227364 DEBUG oslo_concurrency.lockutils [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.067 227364 DEBUG oslo_concurrency.lockutils [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.067 227364 DEBUG oslo_concurrency.lockutils [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.067 227364 DEBUG nova.compute.manager [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] No waiting events found dispatching network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.068 227364 WARNING nova.compute.manager [req-808fb3e2-966c-40f5-8d8c-729a8e873959 req-1ede9ef4-95d1-4258-8de9-f6ea9cdcb8eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received unexpected event network-vif-plugged-26202237-bc25-465c-bcab-caf462b96a73 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.104 227364 DEBUG nova.compute.manager [req-b77a8f30-e1c1-416f-ae8e-59618d3d3629 req-cf055766-1cd4-41f2-b63d-e147bb4ca18a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Received event network-vif-deleted-26202237-bc25-465c-bcab-caf462b96a73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1608266224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.782 227364 DEBUG oslo_concurrency.processutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.874s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.787 227364 DEBUG nova.compute.provider_tree [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.821 227364 DEBUG nova.scheduler.client.report [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.867 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:01 np0005539551 nova_compute[227360]: 2025-11-29 08:03:01.928 227364 INFO nova.scheduler.client.report [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Deleted allocations for instance b8808203-6263-4e9c-bff5-7d273d143a50#033[00m
Nov 29 03:03:02 np0005539551 nova_compute[227360]: 2025-11-29 08:03:02.042 227364 DEBUG oslo_concurrency.lockutils [None req-6cded84c-87c3-4236-b4fc-5f8d28d1e7e7 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "b8808203-6263-4e9c-bff5-7d273d143a50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:02.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:03 np0005539551 nova_compute[227360]: 2025-11-29 08:03:03.004 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:04 np0005539551 nova_compute[227360]: 2025-11-29 08:03:04.592 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:06.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:07 np0005539551 nova_compute[227360]: 2025-11-29 08:03:07.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:08 np0005539551 nova_compute[227360]: 2025-11-29 08:03:08.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:09 np0005539551 nova_compute[227360]: 2025-11-29 08:03:09.595 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.366 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.366 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.385 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.453 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.454 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.459 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.460 227364 INFO nova.compute.claims [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.591 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.969 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403377.9686909, b8808203-6263-4e9c-bff5-7d273d143a50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.970 227364 INFO nova.compute.manager [-] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:03:12 np0005539551 nova_compute[227360]: 2025-11-29 08:03:12.992 227364 DEBUG nova.compute.manager [None req-f878954d-5204-4818-b45b-5fdd292655c6 - - - - - -] [instance: b8808203-6263-4e9c-bff5-7d273d143a50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.007 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2521164636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.043 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.048 227364 DEBUG nova.compute.provider_tree [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.065 227364 DEBUG nova.scheduler.client.report [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.089 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.090 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.132 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.132 227364 DEBUG nova.network.neutron [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.150 227364 INFO nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:03:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.171 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.244 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.245 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.245 227364 INFO nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Creating image(s)#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.292 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.343 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.384 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.388 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.446 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.446 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.447 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.447 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.469 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.474 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.617 227364 DEBUG nova.network.neutron [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 03:03:13 np0005539551 nova_compute[227360]: 2025-11-29 08:03:13.617 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.519 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.607 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.613 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] resizing rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.722 227364 DEBUG nova.objects.instance [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'migration_context' on Instance uuid e2eb77ee-0ede-4fe1-b443-ad372eaadf8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.742 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.743 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Ensure instance console log exists: /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.743 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.743 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.744 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.745 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.749 227364 WARNING nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.755 227364 DEBUG nova.virt.libvirt.host [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.756 227364 DEBUG nova.virt.libvirt.host [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.759 227364 DEBUG nova.virt.libvirt.host [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.760 227364 DEBUG nova.virt.libvirt.host [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.761 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.761 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.761 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.761 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.761 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.762 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.763 227364 DEBUG nova.virt.hardware [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:03:14 np0005539551 nova_compute[227360]: 2025-11-29 08:03:14.765 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:14.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:15.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2890180196' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.218 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.248 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.253 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1322379724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.804 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.806 227364 DEBUG nova.objects.instance [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'pci_devices' on Instance uuid e2eb77ee-0ede-4fe1-b443-ad372eaadf8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.819 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <uuid>e2eb77ee-0ede-4fe1-b443-ad372eaadf8d</uuid>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <name>instance-0000002b</name>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1883721227</nova:name>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:03:14</nova:creationTime>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:user uuid="516e17df54a041ee8101fb121e5d5740">tempest-ServersAdminNegativeTestJSON-1775083495-project-member</nova:user>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <nova:project uuid="392223643d6d4b8b96eaa27c4a0d41cc">tempest-ServersAdminNegativeTestJSON-1775083495</nova:project>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="serial">e2eb77ee-0ede-4fe1-b443-ad372eaadf8d</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="uuid">e2eb77ee-0ede-4fe1-b443-ad372eaadf8d</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/console.log" append="off"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:03:15 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:03:15 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:03:15 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:03:15 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.872 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.872 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.873 227364 INFO nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Using config drive#033[00m
Nov 29 03:03:15 np0005539551 nova_compute[227360]: 2025-11-29 08:03:15.900 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.129 227364 INFO nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Creating config drive at /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.134 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3exovk4x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.261 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3exovk4x" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.296 227364 DEBUG nova.storage.rbd_utils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.300 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.722 227364 DEBUG oslo_concurrency.processutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:16 np0005539551 nova_compute[227360]: 2025-11-29 08:03:16.723 227364 INFO nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Deleting local config drive /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d/disk.config because it was imported into RBD.#033[00m
Nov 29 03:03:16 np0005539551 systemd-machined[190756]: New machine qemu-22-instance-0000002b.
Nov 29 03:03:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:16 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:16 np0005539551 systemd[1]: Started Virtual Machine qemu-22-instance-0000002b.
Nov 29 03:03:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.343 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403397.3426268, e2eb77ee-0ede-4fe1-b443-ad372eaadf8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.344 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.347 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.347 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.351 227364 INFO nova.virt.libvirt.driver [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance spawned successfully.#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.351 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.374 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.380 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.380 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.381 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.381 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.381 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.382 227364 DEBUG nova.virt.libvirt.driver [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.385 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.426 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.426 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403397.3445246, e2eb77ee-0ede-4fe1-b443-ad372eaadf8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.426 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.453 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.454 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.454 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.461 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.463 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.471 227364 INFO nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Took 4.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.471 227364 DEBUG nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.495 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.528 227364 INFO nova.compute.manager [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Took 5.10 seconds to build instance.#033[00m
Nov 29 03:03:17 np0005539551 nova_compute[227360]: 2025-11-29 08:03:17.544 227364 DEBUG oslo_concurrency.lockutils [None req-940b9d6a-25d5-4662-9b70-21627a8ea8dd 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:18 np0005539551 nova_compute[227360]: 2025-11-29 08:03:18.009 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.177 227364 DEBUG nova.objects.instance [None req-c83f8b3f-233b-4aae-a68c-726a4bd796a0 ca7ab7dbe9a7409e8024d14f444f4914 9dce6bedcb494f548ade2886837f6193 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2eb77ee-0ede-4fe1-b443-ad372eaadf8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.201 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403399.201256, e2eb77ee-0ede-4fe1-b443-ad372eaadf8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.201 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.226 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.231 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.251 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:03:19 np0005539551 nova_compute[227360]: 2025-11-29 08:03:19.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:03:19.854 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:03:19.855 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:03:19.855 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:19 np0005539551 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 29 03:03:19 np0005539551 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002b.scope: Consumed 2.612s CPU time.
Nov 29 03:03:19 np0005539551 systemd-machined[190756]: Machine qemu-22-instance-0000002b terminated.
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.062 227364 DEBUG nova.compute.manager [None req-c83f8b3f-233b-4aae-a68c-726a4bd796a0 ca7ab7dbe9a7409e8024d14f444f4914 9dce6bedcb494f548ade2886837f6193 - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.437 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.438 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.438 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.439 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 03:03:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1332185313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.887 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.968 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:20 np0005539551 nova_compute[227360]: 2025-11-29 08:03:20.968 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.131 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.132 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4784MB free_disk=20.902042388916016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.133 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.133 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.224 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e2eb77ee-0ede-4fe1-b443-ad372eaadf8d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.224 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.224 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.265 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1016469252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.669 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.676 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.696 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.723 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:03:21 np0005539551 nova_compute[227360]: 2025-11-29 08:03:21.724 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.783 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.784 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.784 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.785 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.785 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.787 227364 INFO nova.compute.manager [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Terminating instance#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.788 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "refresh_cache-e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.788 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquired lock "refresh_cache-e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.788 227364 DEBUG nova.network.neutron [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:23 np0005539551 nova_compute[227360]: 2025-11-29 08:03:23.981 227364 DEBUG nova.network.neutron [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.252 227364 DEBUG nova.network.neutron [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.266 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Releasing lock "refresh_cache-e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.267 227364 DEBUG nova.compute.manager [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.276 227364 INFO nova.virt.libvirt.driver [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance destroyed successfully.#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.276 227364 DEBUG nova.objects.instance [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'resources' on Instance uuid e2eb77ee-0ede-4fe1-b443-ad372eaadf8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.601 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:24 np0005539551 nova_compute[227360]: 2025-11-29 08:03:24.724 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 03:03:26 np0005539551 nova_compute[227360]: 2025-11-29 08:03:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:26 np0005539551 nova_compute[227360]: 2025-11-29 08:03:26.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:03:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:26.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.088 227364 INFO nova.virt.libvirt.driver [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Deleting instance files /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_del#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.089 227364 INFO nova.virt.libvirt.driver [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Deletion of /var/lib/nova/instances/e2eb77ee-0ede-4fe1-b443-ad372eaadf8d_del complete#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.155 227364 INFO nova.compute.manager [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Took 2.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.156 227364 DEBUG oslo.service.loopingcall [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.156 227364 DEBUG nova.compute.manager [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.156 227364 DEBUG nova.network.neutron [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.510 227364 DEBUG nova.network.neutron [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.529 227364 DEBUG nova.network.neutron [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.549 227364 INFO nova.compute.manager [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Took 0.39 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.606 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.606 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:27 np0005539551 podman[246051]: 2025-11-29 08:03:27.617793404 +0000 UTC m=+0.059523601 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:03:27 np0005539551 podman[246050]: 2025-11-29 08:03:27.625417673 +0000 UTC m=+0.069038172 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:03:27 np0005539551 podman[246049]: 2025-11-29 08:03:27.666047957 +0000 UTC m=+0.106344095 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:03:27 np0005539551 nova_compute[227360]: 2025-11-29 08:03:27.667 227364 DEBUG oslo_concurrency.processutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.053 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3997562578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.179 227364 DEBUG oslo_concurrency.processutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.187 227364 DEBUG nova.compute.provider_tree [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.209 227364 DEBUG nova.scheduler.client.report [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.252 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.278 227364 INFO nova.scheduler.client.report [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Deleted allocations for instance e2eb77ee-0ede-4fe1-b443-ad372eaadf8d
Nov 29 03:03:28 np0005539551 nova_compute[227360]: 2025-11-29 08:03:28.362 227364 DEBUG oslo_concurrency.lockutils [None req-a6f34388-3de6-4944-bafb-36e2deb0af88 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "e2eb77ee-0ede-4fe1-b443-ad372eaadf8d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:28.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 03:03:29 np0005539551 nova_compute[227360]: 2025-11-29 08:03:29.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:33 np0005539551 nova_compute[227360]: 2025-11-29 08:03:33.056 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:33.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 03:03:34 np0005539551 nova_compute[227360]: 2025-11-29 08:03:34.605 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:34.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:35 np0005539551 nova_compute[227360]: 2025-11-29 08:03:35.064 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403400.0620122, e2eb77ee-0ede-4fe1-b443-ad372eaadf8d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:35 np0005539551 nova_compute[227360]: 2025-11-29 08:03:35.064 227364 INFO nova.compute.manager [-] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] VM Stopped (Lifecycle Event)
Nov 29 03:03:35 np0005539551 nova_compute[227360]: 2025-11-29 08:03:35.101 227364 DEBUG nova.compute.manager [None req-391cedeb-277c-483d-9fbc-6119de7af420 - - - - - -] [instance: e2eb77ee-0ede-4fe1-b443-ad372eaadf8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:35.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:38 np0005539551 nova_compute[227360]: 2025-11-29 08:03:38.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:38.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539551 nova_compute[227360]: 2025-11-29 08:03:39.607 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 03:03:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:41.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:43 np0005539551 nova_compute[227360]: 2025-11-29 08:03:43.105 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:43.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:44 np0005539551 nova_compute[227360]: 2025-11-29 08:03:44.610 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:48 np0005539551 nova_compute[227360]: 2025-11-29 08:03:48.107 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:49 np0005539551 nova_compute[227360]: 2025-11-29 08:03:49.613 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:50.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:51.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 03:03:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:52.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:53 np0005539551 nova_compute[227360]: 2025-11-29 08:03:53.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:03:53.188 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:03:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:03:53.189 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:03:53 np0005539551 nova_compute[227360]: 2025-11-29 08:03:53.189 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:53.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:54 np0005539551 nova_compute[227360]: 2025-11-29 08:03:54.613 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:55.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.414 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.414 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.458 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.578 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.579 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.585 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.585 227364 INFO nova.compute.claims [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:03:55 np0005539551 nova_compute[227360]: 2025-11-29 08:03:55.684 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2647262710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.112 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.117 227364 DEBUG nova.compute.provider_tree [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.355 227364 DEBUG nova.scheduler.client.report [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.389 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.390 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:03:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.876 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.877 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:03:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.897 227364 INFO nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:03:56 np0005539551 nova_compute[227360]: 2025-11-29 08:03:56.921 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.065 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.067 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.067 227364 INFO nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Creating image(s)
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.110 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.154 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.189 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.194 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.224 227364 DEBUG nova.policy [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:03:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:57.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.283 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.285 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.286 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.286 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.328 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.334 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9b790d52-5049-4623-a09c-055af6b469e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.893 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Successfully created port: d611c31d-d5f4-4d8f-8460-219ee5d53577 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:03:57 np0005539551 nova_compute[227360]: 2025-11-29 08:03:57.985 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9b790d52-5049-4623-a09c-055af6b469e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.058 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.111 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.389 227364 DEBUG nova.objects.instance [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid 9b790d52-5049-4623-a09c-055af6b469e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.404 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.404 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Ensure instance console log exists: /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.405 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.405 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.405 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:58 np0005539551 podman[246376]: 2025-11-29 08:03:58.49694457 +0000 UTC m=+0.065613348 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:03:58 np0005539551 podman[246375]: 2025-11-29 08:03:58.515411206 +0000 UTC m=+0.081630187 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:03:58 np0005539551 podman[246374]: 2025-11-29 08:03:58.52575879 +0000 UTC m=+0.098887881 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.847 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Successfully updated port: d611c31d-d5f4-4d8f-8460-219ee5d53577 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.865 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.866 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:58 np0005539551 nova_compute[227360]: 2025-11-29 08:03:58.866 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:59 np0005539551 nova_compute[227360]: 2025-11-29 08:03:59.011 227364 DEBUG nova.compute.manager [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-changed-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:59 np0005539551 nova_compute[227360]: 2025-11-29 08:03:59.011 227364 DEBUG nova.compute.manager [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Refreshing instance network info cache due to event network-changed-d611c31d-d5f4-4d8f-8460-219ee5d53577. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:03:59 np0005539551 nova_compute[227360]: 2025-11-29 08:03:59.011 227364 DEBUG oslo_concurrency.lockutils [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:59 np0005539551 nova_compute[227360]: 2025-11-29 08:03:59.053 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:03:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:59 np0005539551 nova_compute[227360]: 2025-11-29 08:03:59.641 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:00.191 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.618 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.618 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.637 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.658 227364 DEBUG nova.network.neutron [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Updating instance_info_cache with network_info: [{"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.681 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.682 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Instance network_info: |[{"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.682 227364 DEBUG oslo_concurrency.lockutils [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.683 227364 DEBUG nova.network.neutron [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Refreshing network info cache for port d611c31d-d5f4-4d8f-8460-219ee5d53577 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.689 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Start _get_guest_xml network_info=[{"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.713 227364 WARNING nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.722 227364 DEBUG nova.virt.libvirt.host [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.723 227364 DEBUG nova.virt.libvirt.host [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.726 227364 DEBUG nova.virt.libvirt.host [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.727 227364 DEBUG nova.virt.libvirt.host [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.729 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.729 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.730 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.730 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.731 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.731 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.732 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.732 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.733 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.733 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.734 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.734 227364 DEBUG nova.virt.hardware [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.739 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.765 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.766 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.776 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.777 227364 INFO nova.compute.claims [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:04:00 np0005539551 nova_compute[227360]: 2025-11-29 08:04:00.900 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:01.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1375329856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.206 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:01.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.239 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.243 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4010640655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.307 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.313 227364 DEBUG nova.compute.provider_tree [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.333 227364 DEBUG nova.scheduler.client.report [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.363 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.364 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.412 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.413 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.435 227364 INFO nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.452 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.533 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.535 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.536 227364 INFO nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Creating image(s)#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.575 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.613 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.645 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.649 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3058170812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.729 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.731 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.732 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.732 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.764 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539551 nova_compute[227360]: 2025-11-29 08:04:01.768 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d3837ff2-ce61-4144-ac9b-c032a188450a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.190 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.192 227364 DEBUG nova.virt.libvirt.vif [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-341592228',display_name='tempest-DeleteServersTestJSON-server-341592228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-341592228',id=47,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-nvjd67qt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-2
94503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:56Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9b790d52-5049-4623-a09c-055af6b469e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.192 227364 DEBUG nova.network.os_vif_util [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.193 227364 DEBUG nova.network.os_vif_util [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.195 227364 DEBUG nova.objects.instance [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b790d52-5049-4623-a09c-055af6b469e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.238 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <uuid>9b790d52-5049-4623-a09c-055af6b469e3</uuid>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <name>instance-0000002f</name>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:name>tempest-DeleteServersTestJSON-server-341592228</nova:name>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:04:00</nova:creationTime>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <nova:port uuid="d611c31d-d5f4-4d8f-8460-219ee5d53577">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="serial">9b790d52-5049-4623-a09c-055af6b469e3</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="uuid">9b790d52-5049-4623-a09c-055af6b469e3</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9b790d52-5049-4623-a09c-055af6b469e3_disk">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9b790d52-5049-4623-a09c-055af6b469e3_disk.config">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:78:29:5e"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <target dev="tapd611c31d-d5"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/console.log" append="off"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:04:02 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:04:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:04:02 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:04:02 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.238 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Preparing to wait for external event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.238 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.238 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.239 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.239 227364 DEBUG nova.virt.libvirt.vif [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-341592228',display_name='tempest-DeleteServersTestJSON-server-341592228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-341592228',id=47,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-nvjd67qt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServers
TestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:56Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9b790d52-5049-4623-a09c-055af6b469e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.239 227364 DEBUG nova.network.os_vif_util [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.240 227364 DEBUG nova.network.os_vif_util [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.240 227364 DEBUG os_vif [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.241 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.241 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.242 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.243 227364 DEBUG nova.policy [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.248 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd611c31d-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.248 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd611c31d-d5, col_values=(('external_ids', {'iface-id': 'd611c31d-d5f4-4d8f-8460-219ee5d53577', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:29:5e', 'vm-uuid': '9b790d52-5049-4623-a09c-055af6b469e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.249 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:02 np0005539551 NetworkManager[48922]: <info>  [1764403442.2505] manager: (tapd611c31d-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.256 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.257 227364 INFO os_vif [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5')#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.420 227364 DEBUG nova.network.neutron [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Updated VIF entry in instance network info cache for port d611c31d-d5f4-4d8f-8460-219ee5d53577. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.420 227364 DEBUG nova.network.neutron [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Updating instance_info_cache with network_info: [{"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.462 227364 DEBUG oslo_concurrency.lockutils [req-5f97f80a-4259-4c46-a49a-2eaefd8aeeaa req-f6b8f757-f0d5-4550-a50c-026cecedf565 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9b790d52-5049-4623-a09c-055af6b469e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.481 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.481 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.481 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:78:29:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.488 227364 INFO nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Using config drive#033[00m
Nov 29 03:04:02 np0005539551 nova_compute[227360]: 2025-11-29 08:04:02.515 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.024 227364 INFO nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Creating config drive at /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.028 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ub9p53o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.156 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ub9p53o" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:03.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.192 227364 DEBUG nova.storage.rbd_utils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9b790d52-5049-4623-a09c-055af6b469e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.195 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config 9b790d52-5049-4623-a09c-055af6b469e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.222 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Successfully created port: b41e72af-8d04-442c-a943-9a9986e9e860 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.225 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d3837ff2-ce61-4144-ac9b-c032a188450a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:03.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.294 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] resizing rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.407 227364 DEBUG nova.objects.instance [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid d3837ff2-ce61-4144-ac9b-c032a188450a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.423 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.424 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Ensure instance console log exists: /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.424 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.424 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.424 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.533 227364 DEBUG oslo_concurrency.processutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config 9b790d52-5049-4623-a09c-055af6b469e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.533 227364 INFO nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Deleting local config drive /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:03 np0005539551 kernel: tapd611c31d-d5: entered promiscuous mode
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.5859] manager: (tapd611c31d-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 03:04:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:03Z|00161|binding|INFO|Claiming lport d611c31d-d5f4-4d8f-8460-219ee5d53577 for this chassis.
Nov 29 03:04:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:03Z|00162|binding|INFO|d611c31d-d5f4-4d8f-8460-219ee5d53577: Claiming fa:16:3e:78:29:5e 10.100.0.7
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.586 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.591 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.597 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:29:5e 10.100.0.7'], port_security=['fa:16:3e:78:29:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b790d52-5049-4623-a09c-055af6b469e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d611c31d-d5f4-4d8f-8460-219ee5d53577) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.598 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d611c31d-d5f4-4d8f-8460-219ee5d53577 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.599 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.612 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe9a06f-b614-4e9c-89bf-8aa7c297b787]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.613 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.615 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.615 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[12d16deb-749e-4dc2-8572-a82e326cafe4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.615 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[55086df3-0c1d-453c-a32e-a596d5e177ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 systemd-udevd[246758]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.627 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[266bbccf-3c90-4fac-8157-e7c50a635084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.6297] device (tapd611c31d-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.6304] device (tapd611c31d-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:03 np0005539551 systemd-machined[190756]: New machine qemu-23-instance-0000002f.
Nov 29 03:04:03 np0005539551 systemd[1]: Started Virtual Machine qemu-23-instance-0000002f.
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.651 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74dc30d5-fc0f-4058-a46c-ba29c9216598]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.656 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:03Z|00163|binding|INFO|Setting lport d611c31d-d5f4-4d8f-8460-219ee5d53577 ovn-installed in OVS
Nov 29 03:04:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:03Z|00164|binding|INFO|Setting lport d611c31d-d5f4-4d8f-8460-219ee5d53577 up in Southbound
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.681 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb0f302-4c1f-4270-9fcb-dff84c095ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 systemd-udevd[246762]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.6884] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.687 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[07bd7943-2c2e-4442-8998-0b6959bf377b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.722 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2018e6-97a6-46d3-a36e-a9c6779f8030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.726 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[67b91881-c502-4054-8af7-81c188cd4220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.7473] device (tapa8be8715-20): carrier: link connected
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.753 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a46b2476-44fc-428d-9837-924eee9a0a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.767 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6639cc8-cd66-4fbc-bbd4-230c30c3617c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649062, 'reachable_time': 17795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246791, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.784 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a75f19a0-18df-4b35-ad5b-81f473086778]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649062, 'tstamp': 649062}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246792, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.797 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[533ae2c7-03d1-49ae-8bd0-a05a68e25ea1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649062, 'reachable_time': 17795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246793, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.824 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[42890382-6eff-4b56-a40d-5ed0bd392ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.881 227364 DEBUG nova.compute.manager [req-9a648033-a454-4e7e-93b4-358aee1914eb req-841b354c-e27f-445a-8b88-585aa3cff73c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.881 227364 DEBUG oslo_concurrency.lockutils [req-9a648033-a454-4e7e-93b4-358aee1914eb req-841b354c-e27f-445a-8b88-585aa3cff73c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.881 227364 DEBUG oslo_concurrency.lockutils [req-9a648033-a454-4e7e-93b4-358aee1914eb req-841b354c-e27f-445a-8b88-585aa3cff73c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.881 227364 DEBUG oslo_concurrency.lockutils [req-9a648033-a454-4e7e-93b4-358aee1914eb req-841b354c-e27f-445a-8b88-585aa3cff73c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.881 227364 DEBUG nova.compute.manager [req-9a648033-a454-4e7e-93b4-358aee1914eb req-841b354c-e27f-445a-8b88-585aa3cff73c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Processing event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[55e88362-dc35-491e-b75c-380c5d315d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.886 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.887 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 NetworkManager[48922]: <info>  [1764403443.8885] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 03:04:03 np0005539551 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.889 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.892 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:03Z|00165|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.912 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 nova_compute[227360]: 2025-11-29 08:04:03.915 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.916 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.917 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2d6f3b-2db9-4a33-9a10-47d179f1bc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.917 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:03.918 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.026 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.027 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403444.0257115, 9b790d52-5049-4623-a09c-055af6b469e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.027 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.029 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.032 227364 INFO nova.virt.libvirt.driver [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Instance spawned successfully.#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.033 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.072 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.077 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.080 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.080 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.081 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.081 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.082 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.082 227364 DEBUG nova.virt.libvirt.driver [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.101 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Successfully updated port: b41e72af-8d04-442c-a943-9a9986e9e860 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.128 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.128 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403444.0267122, 9b790d52-5049-4623-a09c-055af6b469e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.129 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.186 227364 DEBUG nova.compute.manager [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-changed-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.186 227364 DEBUG nova.compute.manager [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Refreshing instance network info cache due to event network-changed-b41e72af-8d04-442c-a943-9a9986e9e860. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.187 227364 DEBUG oslo_concurrency.lockutils [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.187 227364 DEBUG oslo_concurrency.lockutils [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.188 227364 DEBUG nova.network.neutron [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Refreshing network info cache for port b41e72af-8d04-442c-a943-9a9986e9e860 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.250 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.251 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.254 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403444.0293806, 9b790d52-5049-4623-a09c-055af6b469e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.254 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:04 np0005539551 podman[246865]: 2025-11-29 08:04:04.285690669 +0000 UTC m=+0.060340235 container create d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:04:04 np0005539551 systemd[1]: Started libpod-conmon-d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c.scope.
Nov 29 03:04:04 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:04:04 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db44723386f6b9e565e656326635ab596695527bfcd7af11536ec7f9d902a286/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:04 np0005539551 podman[246865]: 2025-11-29 08:04:04.251661737 +0000 UTC m=+0.026311353 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:04 np0005539551 podman[246865]: 2025-11-29 08:04:04.353601579 +0000 UTC m=+0.128251145 container init d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:04 np0005539551 podman[246865]: 2025-11-29 08:04:04.359633434 +0000 UTC m=+0.134283000 container start d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:04:04 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [NOTICE]   (246884) : New worker (246886) forked
Nov 29 03:04:04 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [NOTICE]   (246884) : Loading success.
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.385 227364 DEBUG nova.network.neutron [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.565 227364 INFO nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Took 7.50 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.565 227364 DEBUG nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.615 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.618 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.645 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.675 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.730 227364 INFO nova.compute.manager [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Took 9.22 seconds to build instance.#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.746 227364 DEBUG oslo_concurrency.lockutils [None req-b55c5998-2975-4c67-ac70-3bc329684976 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.890 227364 DEBUG nova.network.neutron [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.910 227364 DEBUG oslo_concurrency.lockutils [req-fb7ed477-5e5d-41c9-8d03-247ccbe3476f req-60ff47d8-44a9-4450-9433-f67ce3d75154 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.911 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:04 np0005539551 nova_compute[227360]: 2025-11-29 08:04:04.911 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.109 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:05.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.662 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.663 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.663 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.663 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.664 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.665 227364 INFO nova.compute.manager [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Terminating instance#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.665 227364 DEBUG nova.compute.manager [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:04:05 np0005539551 kernel: tapd611c31d-d5 (unregistering): left promiscuous mode
Nov 29 03:04:05 np0005539551 NetworkManager[48922]: <info>  [1764403445.8045] device (tapd611c31d-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:05Z|00166|binding|INFO|Releasing lport d611c31d-d5f4-4d8f-8460-219ee5d53577 from this chassis (sb_readonly=0)
Nov 29 03:04:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:05Z|00167|binding|INFO|Setting lport d611c31d-d5f4-4d8f-8460-219ee5d53577 down in Southbound
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:05Z|00168|binding|INFO|Removing iface tapd611c31d-d5 ovn-installed in OVS
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.814 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:05.820 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:29:5e 10.100.0.7'], port_security=['fa:16:3e:78:29:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b790d52-5049-4623-a09c-055af6b469e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d611c31d-d5f4-4d8f-8460-219ee5d53577) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:05.821 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d611c31d-d5f4-4d8f-8460-219ee5d53577 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:04:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:05.822 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:04:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:05.824 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b90e6bd0-bc20-4e73-a6cc-fa25a34d59cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:05.824 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.830 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539551 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 29 03:04:05 np0005539551 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002f.scope: Consumed 2.147s CPU time.
Nov 29 03:04:05 np0005539551 systemd-machined[190756]: Machine qemu-23-instance-0000002f terminated.
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [NOTICE]   (246884) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [NOTICE]   (246884) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [WARNING]  (246884) : Exiting Master process...
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [WARNING]  (246884) : Exiting Master process...
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [ALERT]    (246884) : Current worker (246886) exited with code 143 (Terminated)
Nov 29 03:04:05 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[246880]: [WARNING]  (246884) : All workers exited. Exiting... (0)
Nov 29 03:04:05 np0005539551 systemd[1]: libpod-d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c.scope: Deactivated successfully.
Nov 29 03:04:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 03:04:05 np0005539551 podman[246917]: 2025-11-29 08:04:05.957139147 +0000 UTC m=+0.041154248 container died d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.985 227364 DEBUG nova.compute.manager [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.985 227364 DEBUG oslo_concurrency.lockutils [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.986 227364 DEBUG oslo_concurrency.lockutils [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.986 227364 DEBUG oslo_concurrency.lockutils [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.986 227364 DEBUG nova.compute.manager [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] No waiting events found dispatching network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:05 np0005539551 nova_compute[227360]: 2025-11-29 08:04:05.986 227364 WARNING nova.compute.manager [req-8f80d5c2-eaa6-44e9-9f04-ca7c5133675e req-597f8a2a-7ac0-4ce6-b283-18cbb2b804a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received unexpected event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:04:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay-db44723386f6b9e565e656326635ab596695527bfcd7af11536ec7f9d902a286-merged.mount: Deactivated successfully.
Nov 29 03:04:06 np0005539551 podman[246917]: 2025-11-29 08:04:06.000908026 +0000 UTC m=+0.084923127 container cleanup d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:04:06 np0005539551 systemd[1]: libpod-conmon-d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c.scope: Deactivated successfully.
Nov 29 03:04:06 np0005539551 podman[246945]: 2025-11-29 08:04:06.058169284 +0000 UTC m=+0.035780150 container remove d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.063 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0840ffb1-82fe-45fb-8195-8d7812665fcc]: (4, ('Sat Nov 29 08:04:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c)\nd7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c\nSat Nov 29 08:04:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (d7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c)\nd7eb63b1897584fb70d81f72b042a06c526a80945231ff5cd5a6ddf44c7ec56c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.065 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7f599291-6c9b-44bf-8063-4b16bbb9f07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.065 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.068 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539551 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:04:06 np0005539551 NetworkManager[48922]: <info>  [1764403446.0839] manager: (tapd611c31d-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.086 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[69bd2576-5bf6-4ddd-b82c-bb451eea8958]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.101 227364 INFO nova.virt.libvirt.driver [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Instance destroyed successfully.#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.102 227364 DEBUG nova.objects.instance [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid 9b790d52-5049-4623-a09c-055af6b469e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.107 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[54927e68-52b1-434f-adc7-cf002bb03aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.108 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[51c71af0-3a76-426e-8d99-ee7aa87ef566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.116 227364 DEBUG nova.virt.libvirt.vif [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-341592228',display_name='tempest-DeleteServersTestJSON-server-341592228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-341592228',id=47,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-nvjd67qt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:04Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9b790d52-5049-4623-a09c-055af6b469e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.116 227364 DEBUG nova.network.os_vif_util [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "address": "fa:16:3e:78:29:5e", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd611c31d-d5", "ovs_interfaceid": "d611c31d-d5f4-4d8f-8460-219ee5d53577", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.117 227364 DEBUG nova.network.os_vif_util [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.118 227364 DEBUG os_vif [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.120 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd611c31d-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.122 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.121 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[deb62c48-beb7-441f-b396-c48197729a15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649055, 'reachable_time': 18741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246971, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.124 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:06.124 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[4eac63e3-3a01-4eb9-b268-2fe6e6016d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.125 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:06 np0005539551 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.127 227364 INFO os_vif [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:29:5e,bridge_name='br-int',has_traffic_filtering=True,id=d611c31d-d5f4-4d8f-8460-219ee5d53577,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd611c31d-d5')#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.354 227364 DEBUG nova.network.neutron [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Updating instance_info_cache with network_info: [{"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.376 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.377 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance network_info: |[{"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.378 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Start _get_guest_xml network_info=[{"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.384 227364 WARNING nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.389 227364 DEBUG nova.virt.libvirt.host [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.389 227364 DEBUG nova.virt.libvirt.host [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.392 227364 DEBUG nova.virt.libvirt.host [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.392 227364 DEBUG nova.virt.libvirt.host [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.393 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.393 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.394 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.394 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.394 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.394 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.395 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.395 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.395 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.395 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.395 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.396 227364 DEBUG nova.virt.hardware [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.399 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4256425896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.823 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.863 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:06 np0005539551 nova_compute[227360]: 2025-11-29 08:04:06.868 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.027 227364 INFO nova.virt.libvirt.driver [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Deleting instance files /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3_del#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.029 227364 INFO nova.virt.libvirt.driver [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Deletion of /var/lib/nova/instances/9b790d52-5049-4623-a09c-055af6b469e3_del complete#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.092 227364 INFO nova.compute.manager [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Took 1.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.092 227364 DEBUG oslo.service.loopingcall [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.093 227364 DEBUG nova.compute.manager [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.093 227364 DEBUG nova.network.neutron [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2821735169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.370 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.372 227364 DEBUG nova.virt.libvirt.vif [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-944398509',display_name='tempest-ImagesTestJSON-server-944398509',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-944398509',id=50,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-k5wxk0g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:01Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d3837ff2-ce61-4144-ac9b-c032a188450a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.373 227364 DEBUG nova.network.os_vif_util [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.373 227364 DEBUG nova.network.os_vif_util [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.375 227364 DEBUG nova.objects.instance [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3837ff2-ce61-4144-ac9b-c032a188450a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.388 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <uuid>d3837ff2-ce61-4144-ac9b-c032a188450a</uuid>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <name>instance-00000032</name>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:name>tempest-ImagesTestJSON-server-944398509</nova:name>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:04:06</nova:creationTime>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <nova:port uuid="b41e72af-8d04-442c-a943-9a9986e9e860">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="serial">d3837ff2-ce61-4144-ac9b-c032a188450a</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="uuid">d3837ff2-ce61-4144-ac9b-c032a188450a</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d3837ff2-ce61-4144-ac9b-c032a188450a_disk">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a1:53:3a"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <target dev="tapb41e72af-8d"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/console.log" append="off"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:04:07 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:04:07 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:04:07 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:04:07 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.389 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Preparing to wait for external event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.389 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.389 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.389 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.390 227364 DEBUG nova.virt.libvirt.vif [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-944398509',display_name='tempest-ImagesTestJSON-server-944398509',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-944398509',id=50,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-k5wxk0g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:01Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d3837ff2-ce61-4144-ac9b-c032a188450a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.390 227364 DEBUG nova.network.os_vif_util [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.391 227364 DEBUG nova.network.os_vif_util [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.391 227364 DEBUG os_vif [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.392 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.392 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.394 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.395 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb41e72af-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.395 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb41e72af-8d, col_values=(('external_ids', {'iface-id': 'b41e72af-8d04-442c-a943-9a9986e9e860', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:53:3a', 'vm-uuid': 'd3837ff2-ce61-4144-ac9b-c032a188450a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:07 np0005539551 NetworkManager[48922]: <info>  [1764403447.3978] manager: (tapb41e72af-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.403 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.404 227364 INFO os_vif [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d')#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.472 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.472 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.472 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:a1:53:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.473 227364 INFO nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Using config drive#033[00m
Nov 29 03:04:07 np0005539551 nova_compute[227360]: 2025-11-29 08:04:07.492 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.428 227364 DEBUG nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-unplugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.428 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.428 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.429 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.429 227364 DEBUG nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] No waiting events found dispatching network-vif-unplugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.429 227364 DEBUG nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-unplugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.430 227364 DEBUG nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.430 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9b790d52-5049-4623-a09c-055af6b469e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.430 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.430 227364 DEBUG oslo_concurrency.lockutils [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.431 227364 DEBUG nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] No waiting events found dispatching network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:08 np0005539551 nova_compute[227360]: 2025-11-29 08:04:08.431 227364 WARNING nova.compute.manager [req-82e0cbc8-c9bc-421a-b058-7638440b1a63 req-e4f1d177-6b2a-4ffc-8e04-419416f66430 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received unexpected event network-vif-plugged-d611c31d-d5f4-4d8f-8460-219ee5d53577 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:04:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 03:04:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.208 227364 INFO nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Creating config drive at /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config#033[00m
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.215 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0dngo7y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:09.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.354 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0dngo7y" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.401 227364 DEBUG nova.storage.rbd_utils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.406 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.648 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.953 227364 DEBUG oslo_concurrency.processutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config d3837ff2-ce61-4144-ac9b-c032a188450a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:09 np0005539551 nova_compute[227360]: 2025-11-29 08:04:09.954 227364 INFO nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Deleting local config drive /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 03:04:10 np0005539551 kernel: tapb41e72af-8d: entered promiscuous mode
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.0081] manager: (tapb41e72af-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.062 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:10Z|00169|binding|INFO|Claiming lport b41e72af-8d04-442c-a943-9a9986e9e860 for this chassis.
Nov 29 03:04:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:10Z|00170|binding|INFO|b41e72af-8d04-442c-a943-9a9986e9e860: Claiming fa:16:3e:a1:53:3a 10.100.0.9
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.064 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 systemd-udevd[247128]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:10 np0005539551 systemd-machined[190756]: New machine qemu-24-instance-00000032.
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.1016] device (tapb41e72af-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.1026] device (tapb41e72af-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.125 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:53:3a 10.100.0.9'], port_security=['fa:16:3e:a1:53:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd3837ff2-ce61-4144-ac9b-c032a188450a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b41e72af-8d04-442c-a943-9a9986e9e860) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.126 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b41e72af-8d04-442c-a943-9a9986e9e860 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:04:10 np0005539551 systemd[1]: Started Virtual Machine qemu-24-instance-00000032.
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.128 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.128 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:10Z|00171|binding|INFO|Setting lport b41e72af-8d04-442c-a943-9a9986e9e860 ovn-installed in OVS
Nov 29 03:04:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:10Z|00172|binding|INFO|Setting lport b41e72af-8d04-442c-a943-9a9986e9e860 up in Southbound
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.134 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.142 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaabadf-7b11-4b3d-b2ea-a59993c96bb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.143 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.144 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.144 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7535e7-8505-4c70-b845-12bb914404ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.145 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b9da4682-c261-443f-8f9c-671f95801a75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.159 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d4ac0b-6fbb-4778-82b3-ded984957f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.173 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c56404d-e95a-4901-bc57-9f65aec64e81]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.210 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e487cf4c-24dc-4d1f-b6a7-379b03b00642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.216 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a96dc5-c4ca-4455-a5dc-85e36657f856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.2174] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.247 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfeebce-0555-440f-a0a1-fa3da5244dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.250 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ac179ad1-a547-4ffc-84f0-5bab3089f5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.2692] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.274 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb3739e-8e2d-4cd7-a2ad-51b0f63e0c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.291 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[76f2fdde-6550-4279-bab2-454085938887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649714, 'reachable_time': 15893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247161, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.306 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b86ab0f9-cee8-4f94-ab7b-5d2c401d7517]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649714, 'tstamp': 649714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247162, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.325 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e90e18-98e1-4665-a2e4-9991f4012747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649714, 'reachable_time': 15893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247163, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.361 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2650ed2a-59be-447e-9a0e-6d9640c6a59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.393 227364 DEBUG nova.network.neutron [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.427 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eb472393-a5f2-420c-a7c8-d583b6dc02df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.428 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.429 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.429 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:10 np0005539551 NetworkManager[48922]: <info>  [1764403450.4312] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 03:04:10 np0005539551 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.434 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:10Z|00173|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.462 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.465 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[afa3db39-687a-47f5-b3e2-664a225950aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.467 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:10.470 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.567 227364 INFO nova.compute.manager [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Took 3.47 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.607 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.608 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:10 np0005539551 nova_compute[227360]: 2025-11-29 08:04:10.666 227364 DEBUG oslo_concurrency.processutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:10 np0005539551 podman[247195]: 2025-11-29 08:04:10.83381836 +0000 UTC m=+0.023493504 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.000 227364 DEBUG nova.compute.manager [req-edf12328-d5d5-4dd8-9612-a71ed390acce req-e4206b51-2bf7-457e-9cbc-14a49844f581 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Received event network-vif-deleted-d611c31d-d5f4-4d8f-8460-219ee5d53577 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.080 227364 DEBUG nova.compute.manager [req-5c3ccbd5-3ed8-46c5-bd0d-789749de5e62 req-5d187b95-cfd9-466e-90a8-9c6418b0b2a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.081 227364 DEBUG oslo_concurrency.lockutils [req-5c3ccbd5-3ed8-46c5-bd0d-789749de5e62 req-5d187b95-cfd9-466e-90a8-9c6418b0b2a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.082 227364 DEBUG oslo_concurrency.lockutils [req-5c3ccbd5-3ed8-46c5-bd0d-789749de5e62 req-5d187b95-cfd9-466e-90a8-9c6418b0b2a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.083 227364 DEBUG oslo_concurrency.lockutils [req-5c3ccbd5-3ed8-46c5-bd0d-789749de5e62 req-5d187b95-cfd9-466e-90a8-9c6418b0b2a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.083 227364 DEBUG nova.compute.manager [req-5c3ccbd5-3ed8-46c5-bd0d-789749de5e62 req-5d187b95-cfd9-466e-90a8-9c6418b0b2a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Processing event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:11.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:11 np0005539551 podman[247195]: 2025-11-29 08:04:11.525288702 +0000 UTC m=+0.714963816 container create 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1927502946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.555 227364 DEBUG oslo_concurrency.processutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.562 227364 DEBUG nova.compute.provider_tree [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:11 np0005539551 systemd[1]: Started libpod-conmon-053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e.scope.
Nov 29 03:04:11 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:04:11 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/069b6cf3b123480c0f26872a2d80d7c9c484fdff9eeb710895788d1b44cd2e16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:11 np0005539551 podman[247195]: 2025-11-29 08:04:11.625988611 +0000 UTC m=+0.815663745 container init 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:04:11 np0005539551 podman[247195]: 2025-11-29 08:04:11.633651901 +0000 UTC m=+0.823327025 container start 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.678 227364 DEBUG nova.scheduler.client.report [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:11 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [NOTICE]   (247274) : New worker (247277) forked
Nov 29 03:04:11 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [NOTICE]   (247274) : Loading success.
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.749 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403451.7493422, d3837ff2-ce61-4144-ac9b-c032a188450a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.750 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.752 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.757 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.760 227364 INFO nova.virt.libvirt.driver [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance spawned successfully.#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.760 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.928 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.937 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.941 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.942 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.942 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.943 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.943 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.944 227364 DEBUG nova.virt.libvirt.driver [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.954 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.996 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.997 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403451.749419, d3837ff2-ce61-4144-ac9b-c032a188450a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:11 np0005539551 nova_compute[227360]: 2025-11-29 08:04:11.997 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.035 227364 INFO nova.scheduler.client.report [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance 9b790d52-5049-4623-a09c-055af6b469e3#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.056 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.061 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403451.7556946, d3837ff2-ce61-4144-ac9b-c032a188450a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.062 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.070 227364 INFO nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Took 10.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.070 227364 DEBUG nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.109 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.113 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.152 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.533 227364 DEBUG oslo_concurrency.lockutils [None req-95ff0c3c-7d32-4b52-9f50-7c6af4c6dc90 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9b790d52-5049-4623-a09c-055af6b469e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.549 227364 INFO nova.compute.manager [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Took 11.86 seconds to build instance.#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.594 227364 DEBUG oslo_concurrency.lockutils [None req-463008c3-794c-4581-a078-36e2cdf8f69e fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.954 227364 INFO nova.compute.manager [None req-5ee94d55-f028-4fb9-a2ae-c3f86c66c01d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Pausing#033[00m
Nov 29 03:04:12 np0005539551 nova_compute[227360]: 2025-11-29 08:04:12.955 227364 DEBUG nova.objects.instance [None req-5ee94d55-f028-4fb9-a2ae-c3f86c66c01d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'flavor' on Instance uuid d3837ff2-ce61-4144-ac9b-c032a188450a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.000 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403452.9999356, d3837ff2-ce61-4144-ac9b-c032a188450a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.000 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.002 227364 DEBUG nova.compute.manager [None req-5ee94d55-f028-4fb9-a2ae-c3f86c66c01d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.039 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.041 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.417 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.509 227364 DEBUG nova.compute.manager [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.509 227364 DEBUG oslo_concurrency.lockutils [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.510 227364 DEBUG oslo_concurrency.lockutils [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.510 227364 DEBUG oslo_concurrency.lockutils [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.510 227364 DEBUG nova.compute.manager [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] No waiting events found dispatching network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:13 np0005539551 nova_compute[227360]: 2025-11-29 08:04:13.511 227364 WARNING nova.compute.manager [req-8abb5aff-d615-46c5-acd7-27da766f11d1 req-df875542-8d6c-4451-9e1e-d608bb5b2290 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received unexpected event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 for instance with vm_state paused and task_state None.#033[00m
Nov 29 03:04:14 np0005539551 nova_compute[227360]: 2025-11-29 08:04:14.525 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "63ab1668-e1e0-4c32-bfaa-879399657745" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:14 np0005539551 nova_compute[227360]: 2025-11-29 08:04:14.526 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:14 np0005539551 nova_compute[227360]: 2025-11-29 08:04:14.649 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:14 np0005539551 nova_compute[227360]: 2025-11-29 08:04:14.842 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:15 np0005539551 nova_compute[227360]: 2025-11-29 08:04:15.082 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:15 np0005539551 nova_compute[227360]: 2025-11-29 08:04:15.083 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:15 np0005539551 nova_compute[227360]: 2025-11-29 08:04:15.089 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:15 np0005539551 nova_compute[227360]: 2025-11-29 08:04:15.089 227364 INFO nova.compute.claims [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:04:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:15 np0005539551 nova_compute[227360]: 2025-11-29 08:04:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 03:04:16 np0005539551 nova_compute[227360]: 2025-11-29 08:04:16.505 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:16 np0005539551 nova_compute[227360]: 2025-11-29 08:04:16.507 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1869110741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.136 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.143 227364 DEBUG nova.compute.provider_tree [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:17.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.403 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.467 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.473 227364 DEBUG nova.scheduler.client.report [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.605 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.671 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "12c3101e-b575-4513-8124-77e3c35b2da3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.671 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "12c3101e-b575-4513-8124-77e3c35b2da3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.681 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.682 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.683 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.683 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid d3837ff2-ce61-4144-ac9b-c032a188450a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.792 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.793 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.812 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505#033[00m
Nov 29 03:04:17 np0005539551 nova_compute[227360]: 2025-11-29 08:04:17.930 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.035 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "12c3101e-b575-4513-8124-77e3c35b2da3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.036 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:04:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.142 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.142 227364 DEBUG nova.network.neutron [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.150 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.150 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.158 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.158 227364 INFO nova.compute.claims [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.165 227364 DEBUG nova.compute.manager [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.167 227364 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.217 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.256 227364 INFO nova.compute.manager [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] instance snapshotting#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.257 227364 WARNING nova.compute.manager [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.334 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.335 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.336 227364 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Creating image(s)#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.360 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.385 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.407 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.409 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.465 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.466 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.467 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.467 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.494 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.497 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 63ab1668-e1e0-4c32-bfaa-879399657745_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.532 227364 DEBUG nova.network.neutron [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.532 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.541 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.609 227364 INFO nova.virt.libvirt.driver [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Beginning live snapshot process
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.776 227364 DEBUG nova.virt.libvirt.imagebackend [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:04:18 np0005539551 nova_compute[227360]: 2025-11-29 08:04:18.968 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 63ab1668-e1e0-4c32-bfaa-879399657745_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2079509570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.047 227364 DEBUG nova.storage.rbd_utils [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(83c2763a9c454a2984a005c99fad363b) on rbd image(d3837ff2-ce61-4144-ac9b-c032a188450a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.154 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.364 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] resizing rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.730 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.733 227364 DEBUG nova.compute.provider_tree [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.747 227364 DEBUG nova.scheduler.client.report [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.767 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.768 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.828 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.828 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.850 227364 INFO nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:19.855 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:19.856 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:19.856 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.893 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.990 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.992 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:04:19 np0005539551 nova_compute[227360]: 2025-11-29 08:04:19.993 227364 INFO nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Creating image(s)
Nov 29 03:04:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.239 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.265 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.293 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.296 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.320 227364 DEBUG nova.policy [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.367 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.368 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.368 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.369 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.392 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.395 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.418 227364 DEBUG nova.objects.instance [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'migration_context' on Instance uuid 63ab1668-e1e0-4c32-bfaa-879399657745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.441 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.441 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Ensure instance console log exists: /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.442 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.442 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.443 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.446 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.452 227364 DEBUG nova.storage.rbd_utils [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning vms/d3837ff2-ce61-4144-ac9b-c032a188450a_disk@83c2763a9c454a2984a005c99fad363b to images/c44b1015-4b83-4a23-8199-4ab6d8120f18 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.492 227364 WARNING nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.498 227364 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.498 227364 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.501 227364 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.502 227364 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.503 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.504 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.504 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.504 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.505 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.505 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.505 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.505 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.506 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.506 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.506 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.506 227364 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.509 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.685 227364 DEBUG nova.storage.rbd_utils [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] flattening images/c44b1015-4b83-4a23-8199-4ab6d8120f18 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.730 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Updating instance_info_cache with network_info: [{"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.733 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.792 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-d3837ff2-ce61-4144-ac9b-c032a188450a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.792 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.792 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.793 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.852 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.910 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.911 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.911 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.911 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:04:20 np0005539551 nova_compute[227360]: 2025-11-29 08:04:20.912 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773296910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.004 227364 DEBUG nova.storage.rbd_utils [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(83c2763a9c454a2984a005c99fad363b) on rbd image(d3837ff2-ce61-4144-ac9b-c032a188450a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.050 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.088 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.092 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.115 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403446.0989506, 9b790d52-5049-4623-a09c-055af6b469e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.116 227364 INFO nova.compute.manager [-] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] VM Stopped (Lifecycle Event)
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.123 227364 DEBUG nova.objects.instance [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid 811e2f2f-5e2c-4c60-97e8-c39377dce6dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.138 227364 DEBUG nova.storage.rbd_utils [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(snap) on rbd image(c44b1015-4b83-4a23-8199-4ab6d8120f18) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.172 227364 DEBUG nova.compute.manager [None req-0a5366a5-c491-46fa-9e20-4c52198c56ef - - - - - -] [instance: 9b790d52-5049-4623-a09c-055af6b469e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.173 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.173 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Ensure instance console log exists: /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.173 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.174 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.174 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:21.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.318 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Successfully created port: c765ebc3-573f-4453-ab8a-90e3943a89d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:04:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3680256997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.343 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.404 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.404 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/341433166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.545 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.546 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4596MB free_disk=20.736465454101562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.546 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.546 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.549 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.550 227364 DEBUG nova.objects.instance [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'pci_devices' on Instance uuid 63ab1668-e1e0-4c32-bfaa-879399657745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.576 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <uuid>63ab1668-e1e0-4c32-bfaa-879399657745</uuid>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <name>instance-00000033</name>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1399942865-1</nova:name>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:04:20</nova:creationTime>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:user uuid="1b85c3911b7c4e558779a15904c3ce58">tempest-ServersOnMultiNodesTest-648608509-project-member</nova:user>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <nova:project uuid="4fe9ef6d6ed6441e87cf5bdb5d40af4b">tempest-ServersOnMultiNodesTest-648608509</nova:project>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="serial">63ab1668-e1e0-4c32-bfaa-879399657745</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="uuid">63ab1668-e1e0-4c32-bfaa-879399657745</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/63ab1668-e1e0-4c32-bfaa-879399657745_disk">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/63ab1668-e1e0-4c32-bfaa-879399657745_disk.config">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/console.log" append="off"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:04:21 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:04:21 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:04:21 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:04:21 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.628 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance d3837ff2-ce61-4144-ac9b-c032a188450a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.629 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 63ab1668-e1e0-4c32-bfaa-879399657745 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.629 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 811e2f2f-5e2c-4c60-97e8-c39377dce6dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.629 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.629 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.727 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.819 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.820 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.820 227364 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Using config drive
Nov 29 03:04:21 np0005539551 nova_compute[227360]: 2025-11-29 08:04:21.844 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 03:04:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3197500049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.178 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.184 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.209 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.216 227364 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Creating config drive at /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.220 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp637wmvml execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.240 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.241 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.345 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp637wmvml" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.378 227364 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 63ab1668-e1e0-4c32-bfaa-879399657745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.381 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config 63ab1668-e1e0-4c32-bfaa-879399657745_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.414 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.414 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.583 227364 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config 63ab1668-e1e0-4c32-bfaa-879399657745_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.583 227364 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Deleting local config drive /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.602 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Successfully updated port: c765ebc3-573f-4453-ab8a-90e3943a89d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.633 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.633 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.634 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:22 np0005539551 systemd-machined[190756]: New machine qemu-25-instance-00000033.
Nov 29 03:04:22 np0005539551 systemd[1]: Started Virtual Machine qemu-25-instance-00000033.
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.912 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.937 227364 DEBUG nova.compute.manager [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-changed-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.938 227364 DEBUG nova.compute.manager [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Refreshing instance network info cache due to event network-changed-c765ebc3-573f-4453-ab8a-90e3943a89d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:22 np0005539551 nova_compute[227360]: 2025-11-29 08:04:22.939 227364 DEBUG oslo_concurrency.lockutils [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.050 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403463.050505, 63ab1668-e1e0-4c32-bfaa-879399657745 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.051 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.053 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.053 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.056 227364 INFO nova.virt.libvirt.driver [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance spawned successfully.#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.057 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.083 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.089 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.093 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.093 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.094 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.094 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.095 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.095 227364 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.131 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.131 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403463.0522404, 63ab1668-e1e0-4c32-bfaa-879399657745 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.132 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.175 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.179 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.196 227364 INFO nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Took 4.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.196 227364 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.223 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:23.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.269 227364 INFO nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Took 8.24 seconds to build instance.#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.293 227364 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.433 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.434 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.434 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:04:23 np0005539551 nova_compute[227360]: 2025-11-29 08:04:23.618 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.083 227364 DEBUG nova.network.neutron [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Updating instance_info_cache with network_info: [{"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.106 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.107 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Instance network_info: |[{"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.107 227364 DEBUG oslo_concurrency.lockutils [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.107 227364 DEBUG nova.network.neutron [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Refreshing network info cache for port c765ebc3-573f-4453-ab8a-90e3943a89d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.110 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Start _get_guest_xml network_info=[{"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.114 227364 WARNING nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.118 227364 DEBUG nova.virt.libvirt.host [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.118 227364 DEBUG nova.virt.libvirt.host [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.123 227364 DEBUG nova.virt.libvirt.host [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.124 227364 DEBUG nova.virt.libvirt.host [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.125 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.126 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.126 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.127 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.127 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.127 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.128 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.128 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.128 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.128 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.129 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.129 227364 DEBUG nova.virt.hardware [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.133 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.344 227364 INFO nova.virt.libvirt.driver [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Snapshot image upload complete#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.345 227364 INFO nova.compute.manager [None req-5d8dbca8-7719-4c4a-afa0-11fe1b87fba6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Took 6.09 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:04:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3160390305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.562 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.594 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.599 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:24 np0005539551 nova_compute[227360]: 2025-11-29 08:04:24.654 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/397019354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.112 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.114 227364 DEBUG nova.virt.libvirt.vif [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1497216928',display_name='tempest-DeleteServersTestJSON-server-1497216928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1497216928',id=53,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-sh94cpug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:19Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=811e2f2f-5e2c-4c60-97e8-c39377dce6dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.114 227364 DEBUG nova.network.os_vif_util [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.115 227364 DEBUG nova.network.os_vif_util [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.116 227364 DEBUG nova.objects.instance [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 811e2f2f-5e2c-4c60-97e8-c39377dce6dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.159 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <uuid>811e2f2f-5e2c-4c60-97e8-c39377dce6dc</uuid>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <name>instance-00000035</name>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:name>tempest-DeleteServersTestJSON-server-1497216928</nova:name>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:04:24</nova:creationTime>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <nova:port uuid="c765ebc3-573f-4453-ab8a-90e3943a89d5">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="serial">811e2f2f-5e2c-4c60-97e8-c39377dce6dc</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="uuid">811e2f2f-5e2c-4c60-97e8-c39377dce6dc</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:f4:73:ea"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <target dev="tapc765ebc3-57"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/console.log" append="off"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:04:25 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:04:25 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:04:25 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:04:25 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.160 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Preparing to wait for external event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.160 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.160 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.161 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.161 227364 DEBUG nova.virt.libvirt.vif [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1497216928',display_name='tempest-DeleteServersTestJSON-server-1497216928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1497216928',id=53,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-sh94cpug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:19Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=811e2f2f-5e2c-4c60-97e8-c39377dce6dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.162 227364 DEBUG nova.network.os_vif_util [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.162 227364 DEBUG nova.network.os_vif_util [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.163 227364 DEBUG os_vif [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.163 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.164 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.164 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.167 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.167 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc765ebc3-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.168 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc765ebc3-57, col_values=(('external_ids', {'iface-id': 'c765ebc3-573f-4453-ab8a-90e3943a89d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:73:ea', 'vm-uuid': '811e2f2f-5e2c-4c60-97e8-c39377dce6dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:25 np0005539551 NetworkManager[48922]: <info>  [1764403465.1706] manager: (tapc765ebc3-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.172 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.181 227364 INFO os_vif [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57')
Nov 29 03:04:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.252 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.252 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.252 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:f4:73:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.253 227364 INFO nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Using config drive
Nov 29 03:04:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:25.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:25 np0005539551 nova_compute[227360]: 2025-11-29 08:04:25.282 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.356 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "63ab1668-e1e0-4c32-bfaa-879399657745" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.357 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.358 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "63ab1668-e1e0-4c32-bfaa-879399657745-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.358 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.359 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.360 227364 INFO nova.compute.manager [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Terminating instance
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.362 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "refresh_cache-63ab1668-e1e0-4c32-bfaa-879399657745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.363 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquired lock "refresh_cache-63ab1668-e1e0-4c32-bfaa-879399657745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.364 227364 DEBUG nova.network.neutron [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.401 227364 INFO nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Creating config drive at /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.405 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjuss44p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.490 227364 DEBUG nova.network.neutron [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Updated VIF entry in instance network info cache for port c765ebc3-573f-4453-ab8a-90e3943a89d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.491 227364 DEBUG nova.network.neutron [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Updating instance_info_cache with network_info: [{"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.509 227364 DEBUG oslo_concurrency.lockutils [req-80a6cbca-6f67-4489-be0e-6ee314616faf req-ee94ee8a-fabb-44ff-ade9-426e3c21b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-811e2f2f-5e2c-4c60-97e8-c39377dce6dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.539 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjuss44p" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.575 227364 DEBUG nova.storage.rbd_utils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.581 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.622 227364 DEBUG nova.network.neutron [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.759 227364 DEBUG oslo_concurrency.processutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config 811e2f2f-5e2c-4c60-97e8-c39377dce6dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.759 227364 INFO nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Deleting local config drive /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc/disk.config because it was imported into RBD.
Nov 29 03:04:26 np0005539551 kernel: tapc765ebc3-57: entered promiscuous mode
Nov 29 03:04:26 np0005539551 NetworkManager[48922]: <info>  [1764403466.8125] manager: (tapc765ebc3-57): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.866 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:26Z|00174|binding|INFO|Claiming lport c765ebc3-573f-4453-ab8a-90e3943a89d5 for this chassis.
Nov 29 03:04:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:26Z|00175|binding|INFO|c765ebc3-573f-4453-ab8a-90e3943a89d5: Claiming fa:16:3e:f4:73:ea 10.100.0.11
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.869 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.874 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:73:ea 10.100.0.11'], port_security=['fa:16:3e:f4:73:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '811e2f2f-5e2c-4c60-97e8-c39377dce6dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c765ebc3-573f-4453-ab8a-90e3943a89d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.875 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c765ebc3-573f-4453-ab8a-90e3943a89d5 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.876 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:26Z|00176|binding|INFO|Setting lport c765ebc3-573f-4453-ab8a-90e3943a89d5 ovn-installed in OVS
Nov 29 03:04:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:26Z|00177|binding|INFO|Setting lport c765ebc3-573f-4453-ab8a-90e3943a89d5 up in Southbound
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.887 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[be82bb2b-1506-4290-8cde-938cd796b071]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.888 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.891 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.891 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d2830cf2-3b13-47f2-83a9-5e526ec0f1e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 nova_compute[227360]: 2025-11-29 08:04:26.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.891 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[62d95855-4ca2-4924-85b6-59bb684403fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 systemd-udevd[248166]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.905 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[02a7c9eb-da25-4179-abf0-14c5cfba1486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 systemd-machined[190756]: New machine qemu-26-instance-00000035.
Nov 29 03:04:26 np0005539551 NetworkManager[48922]: <info>  [1764403466.9144] device (tapc765ebc3-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:26 np0005539551 NetworkManager[48922]: <info>  [1764403466.9150] device (tapc765ebc3-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:26 np0005539551 systemd[1]: Started Virtual Machine qemu-26-instance-00000035.
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.928 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[62404e76-9901-4e5d-9dfd-f295a3f55d62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.960 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[18b3ee8a-1e5f-4097-9fd4-7e56c8842026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:26 np0005539551 systemd-udevd[248169]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:26 np0005539551 NetworkManager[48922]: <info>  [1764403466.9734] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 03:04:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:26.972 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[561d9be2-d540-4fff-a95f-4e9693cd8600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.019 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef1b82b-0101-43ff-90d1-2bfd679d642e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.022 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[81fc2805-a02b-4488-a62b-fcb343afa368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:27 np0005539551 NetworkManager[48922]: <info>  [1764403467.0442] device (tapa8be8715-20): carrier: link connected
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.050 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[833f154f-85f9-4c1a-a7ef-df558c305e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.050 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.051 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.051 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.051 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.052 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.053 227364 INFO nova.compute.manager [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Terminating instance
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.054 227364 DEBUG nova.compute.manager [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.070 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[518f8c61-266c-450d-a252-9abf2544558e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651392, 'reachable_time': 27701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248197, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.084 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7aac8e67-c774-4b56-bf69-896f237636fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651392, 'tstamp': 651392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248198, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.093 227364 DEBUG nova.network.neutron [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:27 np0005539551 kernel: tapb41e72af-8d (unregistering): left promiscuous mode
Nov 29 03:04:27 np0005539551 NetworkManager[48922]: <info>  [1764403467.1003] device (tapb41e72af-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:27Z|00178|binding|INFO|Releasing lport b41e72af-8d04-442c-a943-9a9986e9e860 from this chassis (sb_readonly=0)
Nov 29 03:04:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:27Z|00179|binding|INFO|Setting lport b41e72af-8d04-442c-a943-9a9986e9e860 down in Southbound
Nov 29 03:04:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:27Z|00180|binding|INFO|Removing iface tapb41e72af-8d ovn-installed in OVS
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.109 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb20b87-e25d-40c8-a540-2e9dd4ce19f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651392, 'reachable_time': 27701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248199, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.125 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.140 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e58d6ae6-926a-4544-adcd-271f01899054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539551 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000032.scope: Consumed 2.250s CPU time.
Nov 29 03:04:27 np0005539551 systemd-machined[190756]: Machine qemu-24-instance-00000032 terminated.
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.150 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:53:3a 10.100.0.9'], port_security=['fa:16:3e:a1:53:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd3837ff2-ce61-4144-ac9b-c032a188450a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b41e72af-8d04-442c-a943-9a9986e9e860) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.157 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Releasing lock "refresh_cache-63ab1668-e1e0-4c32-bfaa-879399657745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.157 227364 DEBUG nova.compute.manager [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.194 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[70366d9c-3b21-440d-b54f-c3625fa8b027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.195 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.195 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.196 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:27 np0005539551 NetworkManager[48922]: <info>  [1764403467.1982] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.199 227364 DEBUG nova.compute.manager [req-12ea1679-866f-4dbb-912c-cf3c42c6ee07 req-81c70cfc-a881-4096-abc0-aa234a591004 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.199 227364 DEBUG oslo_concurrency.lockutils [req-12ea1679-866f-4dbb-912c-cf3c42c6ee07 req-81c70cfc-a881-4096-abc0-aa234a591004 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.200 227364 DEBUG oslo_concurrency.lockutils [req-12ea1679-866f-4dbb-912c-cf3c42c6ee07 req-81c70cfc-a881-4096-abc0-aa234a591004 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.200 227364 DEBUG oslo_concurrency.lockutils [req-12ea1679-866f-4dbb-912c-cf3c42c6ee07 req-81c70cfc-a881-4096-abc0-aa234a591004 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.200 227364 DEBUG nova.compute.manager [req-12ea1679-866f-4dbb-912c-cf3c42c6ee07 req-81c70cfc-a881-4096-abc0-aa234a591004 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Processing event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:04:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.205 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:27Z|00181|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:04:27 np0005539551 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539551 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000033.scope: Consumed 4.621s CPU time.
Nov 29 03:04:27 np0005539551 systemd-machined[190756]: Machine qemu-25-instance-00000033 terminated.
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.222 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.225 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.226 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.227 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[355ea0e2-198f-44e7-8c65-2621f8156db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.228 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.228 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:27.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:27 np0005539551 NetworkManager[48922]: <info>  [1764403467.2728] manager: (tapb41e72af-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.291 227364 INFO nova.virt.libvirt.driver [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Instance destroyed successfully.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.292 227364 DEBUG nova.objects.instance [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid d3837ff2-ce61-4144-ac9b-c032a188450a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.376 227364 INFO nova.virt.libvirt.driver [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance destroyed successfully.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.377 227364 DEBUG nova.objects.instance [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'resources' on Instance uuid 63ab1668-e1e0-4c32-bfaa-879399657745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.400 227364 DEBUG nova.virt.libvirt.vif [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-944398509',display_name='tempest-ImagesTestJSON-server-944398509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-944398509',id=50,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-k5wxk0g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:24Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d3837ff2-ce61-4144-ac9b-c032a188450a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.401 227364 DEBUG nova.network.os_vif_util [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "b41e72af-8d04-442c-a943-9a9986e9e860", "address": "fa:16:3e:a1:53:3a", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb41e72af-8d", "ovs_interfaceid": "b41e72af-8d04-442c-a943-9a9986e9e860", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.402 227364 DEBUG nova.network.os_vif_util [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.402 227364 DEBUG os_vif [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.405 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb41e72af-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.424 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.428 227364 DEBUG nova.compute.manager [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-unplugged-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.429 227364 DEBUG oslo_concurrency.lockutils [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.429 227364 DEBUG oslo_concurrency.lockutils [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.429 227364 DEBUG oslo_concurrency.lockutils [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.430 227364 DEBUG nova.compute.manager [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] No waiting events found dispatching network-vif-unplugged-b41e72af-8d04-442c-a943-9a9986e9e860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.430 227364 DEBUG nova.compute.manager [req-1a5bc631-3e34-4c97-a444-41db0cd37aa7 req-22611520-d70d-42dc-973d-f8c0f06e968e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-unplugged-b41e72af-8d04-442c-a943-9a9986e9e860 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.433 227364 INFO os_vif [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:53:3a,bridge_name='br-int',has_traffic_filtering=True,id=b41e72af-8d04-442c-a943-9a9986e9e860,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb41e72af-8d')#033[00m
Nov 29 03:04:27 np0005539551 podman[248322]: 2025-11-29 08:04:27.626665849 +0000 UTC m=+0.071881761 container create 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.644 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403467.6437511, 811e2f2f-5e2c-4c60-97e8-c39377dce6dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.644 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.646 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.649 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.651 227364 INFO nova.virt.libvirt.driver [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Instance spawned successfully.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.651 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.676 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:27 np0005539551 podman[248322]: 2025-11-29 08:04:27.584899024 +0000 UTC m=+0.030114956 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.681 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.684 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.684 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.685 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.685 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.685 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.686 227364 DEBUG nova.virt.libvirt.driver [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:27 np0005539551 systemd[1]: Started libpod-conmon-37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df.scope.
Nov 29 03:04:27 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:04:27 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bd7acc3ee3bfcf46fe067dbb2b70758d56dd8b0a021d39e539dad4a320af25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:27 np0005539551 podman[248322]: 2025-11-29 08:04:27.724405755 +0000 UTC m=+0.169621657 container init 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.724 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.724 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403467.6455245, 811e2f2f-5e2c-4c60-97e8-c39377dce6dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.725 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:27 np0005539551 podman[248322]: 2025-11-29 08:04:27.729443033 +0000 UTC m=+0.174658915 container start 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [NOTICE]   (248349) : New worker (248351) forked
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [NOTICE]   (248349) : Loading success.
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.761 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.764 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403467.648428, 811e2f2f-5e2c-4c60-97e8-c39377dce6dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.764 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.784 227364 INFO nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Took 7.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.784 227364 DEBUG nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.785 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b41e72af-8d04-442c-a943-9a9986e9e860 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.786 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.786 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.787 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f038327d-ccbc-4575-b578-b92fb641a943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:27.787 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.799 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.857 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.892 227364 INFO nova.compute.manager [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Took 9.77 seconds to build instance.#033[00m
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [NOTICE]   (247274) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [NOTICE]   (247274) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [WARNING]  (247274) : Exiting Master process...
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [WARNING]  (247274) : Exiting Master process...
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [ALERT]    (247274) : Current worker (247277) exited with code 143 (Terminated)
Nov 29 03:04:27 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[247266]: [WARNING]  (247274) : All workers exited. Exiting... (0)
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.909 227364 INFO nova.virt.libvirt.driver [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Deleting instance files /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a_del#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.910 227364 INFO nova.virt.libvirt.driver [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Deletion of /var/lib/nova/instances/d3837ff2-ce61-4144-ac9b-c032a188450a_del complete#033[00m
Nov 29 03:04:27 np0005539551 systemd[1]: libpod-053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.915 227364 DEBUG oslo_concurrency.lockutils [None req-61d97ef8-49f9-4eb0-ac34-5bf3b70bd13f 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:27 np0005539551 podman[248379]: 2025-11-29 08:04:27.918458262 +0000 UTC m=+0.048557942 container died 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:04:27 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:27 np0005539551 systemd[1]: var-lib-containers-storage-overlay-069b6cf3b123480c0f26872a2d80d7c9c484fdff9eeb710895788d1b44cd2e16-merged.mount: Deactivated successfully.
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.955 227364 INFO nova.compute.manager [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.956 227364 DEBUG oslo.service.loopingcall [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.956 227364 DEBUG nova.compute.manager [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:27 np0005539551 nova_compute[227360]: 2025-11-29 08:04:27.956 227364 DEBUG nova.network.neutron [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:27 np0005539551 podman[248379]: 2025-11-29 08:04:27.957788789 +0000 UTC m=+0.087888469 container cleanup 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:04:27 np0005539551 systemd[1]: libpod-conmon-053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e.scope: Deactivated successfully.
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.017 227364 INFO nova.virt.libvirt.driver [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Deleting instance files /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745_del#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.018 227364 INFO nova.virt.libvirt.driver [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Deletion of /var/lib/nova/instances/63ab1668-e1e0-4c32-bfaa-879399657745_del complete#033[00m
Nov 29 03:04:28 np0005539551 podman[248409]: 2025-11-29 08:04:28.027028926 +0000 UTC m=+0.043761130 container remove 053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.032 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45504834-8ae6-4e56-9099-40d5084ad845]: (4, ('Sat Nov 29 08:04:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e)\n053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e\nSat Nov 29 08:04:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e)\n053f167d6b69506f3d0f10382de731586f6f27ebf4c5be898b721b80952fab5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.035 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dda46a31-7a73-46a0-afef-f764f5c6609d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.036 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:28 np0005539551 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.052 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.058 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b48244ab-04e5-4215-b9d3-fa4329c61c79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.068 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e56453-47be-470e-905e-551836dc9eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.069 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[519b974f-3871-4698-9be3-704a43ecbf9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.090 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b29a251c-e6d3-4a2f-984c-ae1103b0d5be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649708, 'reachable_time': 37121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248423, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.092 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:28.092 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f25fd180-782e-4d6e-9ca3-4edcdd79e4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.141 227364 INFO nova.compute.manager [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.141 227364 DEBUG oslo.service.loopingcall [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.142 227364 DEBUG nova.compute.manager [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.142 227364 DEBUG nova.network.neutron [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.590 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:28 np0005539551 podman[248424]: 2025-11-29 08:04:28.595963812 +0000 UTC m=+0.052936521 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:04:28 np0005539551 podman[248426]: 2025-11-29 08:04:28.60283344 +0000 UTC m=+0.052740316 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, container_name=multipathd)
Nov 29 03:04:28 np0005539551 podman[248425]: 2025-11-29 08:04:28.631118554 +0000 UTC m=+0.084366622 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.666 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:28 np0005539551 nova_compute[227360]: 2025-11-29 08:04:28.666 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.200 227364 DEBUG nova.network.neutron [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:29.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.215 227364 DEBUG nova.network.neutron [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.232 227364 INFO nova.compute.manager [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:29.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.293 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.294 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.405 227364 DEBUG oslo_concurrency.processutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.557 227364 DEBUG nova.network.neutron [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.576 227364 INFO nova.compute.manager [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.631 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.655 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.667 227364 DEBUG nova.compute.manager [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.667 227364 DEBUG oslo_concurrency.lockutils [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.668 227364 DEBUG oslo_concurrency.lockutils [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.668 227364 DEBUG oslo_concurrency.lockutils [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.668 227364 DEBUG nova.compute.manager [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] No waiting events found dispatching network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.669 227364 WARNING nova.compute.manager [req-c828c3ca-2952-4179-958a-a9113b5319f7 req-c81fffd6-fde4-41f1-81e5-b59911ac53a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received unexpected event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.725 227364 DEBUG nova.compute.manager [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.725 227364 DEBUG oslo_concurrency.lockutils [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.726 227364 DEBUG oslo_concurrency.lockutils [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.726 227364 DEBUG oslo_concurrency.lockutils [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.726 227364 DEBUG nova.compute.manager [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] No waiting events found dispatching network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.726 227364 WARNING nova.compute.manager [req-13ecb11b-6175-4103-b5e2-47bb8446067d req-174f394d-667e-44bb-902c-91d855ec0697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received unexpected event network-vif-plugged-b41e72af-8d04-442c-a943-9a9986e9e860 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.782 227364 DEBUG nova.compute.manager [req-d1d5b857-1acb-49d5-bb03-3442045642dc req-da0ac99c-bf9c-4b8a-b426-4f72918f1e6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Received event network-vif-deleted-b41e72af-8d04-442c-a943-9a9986e9e860 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1147396923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.944 227364 DEBUG oslo_concurrency.processutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.951 227364 DEBUG nova.compute.provider_tree [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:29 np0005539551 nova_compute[227360]: 2025-11-29 08:04:29.975 227364 DEBUG nova.scheduler.client.report [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.022 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.027 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.081 227364 INFO nova.scheduler.client.report [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Deleted allocations for instance 63ab1668-e1e0-4c32-bfaa-879399657745#033[00m
Nov 29 03:04:30 np0005539551 podman[248670]: 2025-11-29 08:04:30.134377085 +0000 UTC m=+0.065725292 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.156 227364 DEBUG oslo_concurrency.processutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.233 227364 DEBUG oslo_concurrency.lockutils [None req-f896945f-172b-48cb-bd88-a5fbeb6acf56 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "63ab1668-e1e0-4c32-bfaa-879399657745" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:30 np0005539551 podman[248670]: 2025-11-29 08:04:30.25169786 +0000 UTC m=+0.183046087 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/202953445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.630 227364 DEBUG oslo_concurrency.processutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.636 227364 DEBUG nova.compute.provider_tree [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.771 227364 DEBUG nova.scheduler.client.report [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.817 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:30 np0005539551 nova_compute[227360]: 2025-11-29 08:04:30.866 227364 INFO nova.scheduler.client.report [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance d3837ff2-ce61-4144-ac9b-c032a188450a#033[00m
Nov 29 03:04:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 03:04:31 np0005539551 nova_compute[227360]: 2025-11-29 08:04:31.130 227364 DEBUG oslo_concurrency.lockutils [None req-8947ac46-dc5e-4ea6-aaf2-ddb8504c00dc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d3837ff2-ce61-4144-ac9b-c032a188450a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:31.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539551 nova_compute[227360]: 2025-11-29 08:04:32.454 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:32 np0005539551 nova_compute[227360]: 2025-11-29 08:04:32.762 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:32 np0005539551 nova_compute[227360]: 2025-11-29 08:04:32.763 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:32 np0005539551 nova_compute[227360]: 2025-11-29 08:04:32.781 227364 DEBUG nova.objects.instance [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'flavor' on Instance uuid 811e2f2f-5e2c-4c60-97e8-c39377dce6dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:32 np0005539551 nova_compute[227360]: 2025-11-29 08:04:32.837 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 03:04:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:33.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.174 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.175 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.175 227364 INFO nova.compute.manager [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Attaching volume c0dd2885-3214-46b3-a61c-f2d3eac073b6 to /dev/vdb#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.428 227364 DEBUG os_brick.utils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.429 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.439 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.439 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[3211b67c-ba56-4e9e-a69a-f17c0db04c81]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.440 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.448 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.448 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[2cde4f5a-6d3c-4122-bab8-0f995c0e5243]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.449 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.457 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.457 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[307d2af9-dddc-43b4-b022-00b11a6a7d92]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.458 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c66da8eb-ae90-4037-8f4e-ac8031b82002]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.459 227364 DEBUG oslo_concurrency.processutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.480 227364 DEBUG oslo_concurrency.processutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.483 227364 DEBUG os_brick.initiator.connectors.lightos [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.483 227364 DEBUG os_brick.initiator.connectors.lightos [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.483 227364 DEBUG os_brick.initiator.connectors.lightos [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.483 227364 DEBUG os_brick.utils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.484 227364 DEBUG nova.virt.block_device [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Updating existing volume attachment record: c1e1bd19-138b-47cc-a3c2-b49ada1b6f51 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:04:34 np0005539551 nova_compute[227360]: 2025-11-29 08:04:34.656 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:35.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.636 227364 DEBUG nova.objects.instance [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'flavor' on Instance uuid 811e2f2f-5e2c-4c60-97e8-c39377dce6dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.668 227364 DEBUG nova.virt.libvirt.driver [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Attempting to attach volume c0dd2885-3214-46b3-a61c-f2d3eac073b6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.673 227364 DEBUG nova.virt.libvirt.guest [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-c0dd2885-3214-46b3-a61c-f2d3eac073b6">
Nov 29 03:04:35 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:04:35 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:04:35 np0005539551 nova_compute[227360]:  <serial>c0dd2885-3214-46b3-a61c-f2d3eac073b6</serial>
Nov 29 03:04:35 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:04:35 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.825 227364 DEBUG nova.virt.libvirt.driver [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.826 227364 DEBUG nova.virt.libvirt.driver [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.826 227364 DEBUG nova.virt.libvirt.driver [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:35 np0005539551 nova_compute[227360]: 2025-11-29 08:04:35.827 227364 DEBUG nova.virt.libvirt.driver [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:f4:73:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539551 nova_compute[227360]: 2025-11-29 08:04:36.044 227364 DEBUG oslo_concurrency.lockutils [None req-03774bba-4ed9-45f2-ac4f-24513576a4bb 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:37.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.387 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.388 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.388 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.388 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.389 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.390 227364 INFO nova.compute.manager [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Terminating instance#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.391 227364 DEBUG nova.compute.manager [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:04:37 np0005539551 kernel: tapc765ebc3-57 (unregistering): left promiscuous mode
Nov 29 03:04:37 np0005539551 NetworkManager[48922]: <info>  [1764403477.4411] device (tapc765ebc3-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:37Z|00182|binding|INFO|Releasing lport c765ebc3-573f-4453-ab8a-90e3943a89d5 from this chassis (sb_readonly=0)
Nov 29 03:04:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:37Z|00183|binding|INFO|Setting lport c765ebc3-573f-4453-ab8a-90e3943a89d5 down in Southbound
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.454 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:37Z|00184|binding|INFO|Removing iface tapc765ebc3-57 ovn-installed in OVS
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.467 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:73:ea 10.100.0.11'], port_security=['fa:16:3e:f4:73:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '811e2f2f-5e2c-4c60-97e8-c39377dce6dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c765ebc3-573f-4453-ab8a-90e3943a89d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.469 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c765ebc3-573f-4453-ab8a-90e3943a89d5 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.471 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.472 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b9cce6-feea-4c2d-9e65-8b4ce74bccfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.473 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.490 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 29 03:04:37 np0005539551 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000035.scope: Consumed 10.654s CPU time.
Nov 29 03:04:37 np0005539551 systemd-machined[190756]: Machine qemu-26-instance-00000035 terminated.
Nov 29 03:04:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:04:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.9 total, 600.0 interval#012Cumulative writes: 20K writes, 78K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 20K writes, 6402 syncs, 3.16 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9853 writes, 38K keys, 9853 commit groups, 1.0 writes per commit group, ingest: 39.99 MB, 0.07 MB/s#012Interval WAL: 9853 writes, 3756 syncs, 2.62 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.628 227364 INFO nova.virt.libvirt.driver [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Instance destroyed successfully.#033[00m
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [NOTICE]   (248349) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [NOTICE]   (248349) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [WARNING]  (248349) : Exiting Master process...
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [WARNING]  (248349) : Exiting Master process...
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [ALERT]    (248349) : Current worker (248351) exited with code 143 (Terminated)
Nov 29 03:04:37 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[248344]: [WARNING]  (248349) : All workers exited. Exiting... (0)
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.631 227364 DEBUG nova.objects.instance [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid 811e2f2f-5e2c-4c60-97e8-c39377dce6dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:37 np0005539551 systemd[1]: libpod-37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df.scope: Deactivated successfully.
Nov 29 03:04:37 np0005539551 podman[248994]: 2025-11-29 08:04:37.641068045 +0000 UTC m=+0.054473632 container died 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.649 227364 DEBUG nova.virt.libvirt.vif [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1497216928',display_name='tempest-DeleteServersTestJSON-server-1497216928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1497216928',id=53,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-sh94cpug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:27Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=811e2f2f-5e2c-4c60-97e8-c39377dce6dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.650 227364 DEBUG nova.network.os_vif_util [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "address": "fa:16:3e:f4:73:ea", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc765ebc3-57", "ovs_interfaceid": "c765ebc3-573f-4453-ab8a-90e3943a89d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.651 227364 DEBUG nova.network.os_vif_util [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.657 227364 DEBUG os_vif [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.660 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.660 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc765ebc3-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.665 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.668 227364 INFO os_vif [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:73:ea,bridge_name='br-int',has_traffic_filtering=True,id=c765ebc3-573f-4453-ab8a-90e3943a89d5,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc765ebc3-57')#033[00m
Nov 29 03:04:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-46bd7acc3ee3bfcf46fe067dbb2b70758d56dd8b0a021d39e539dad4a320af25-merged.mount: Deactivated successfully.
Nov 29 03:04:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:37 np0005539551 podman[248994]: 2025-11-29 08:04:37.692124355 +0000 UTC m=+0.105529902 container cleanup 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:04:37 np0005539551 systemd[1]: libpod-conmon-37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df.scope: Deactivated successfully.
Nov 29 03:04:37 np0005539551 podman[249050]: 2025-11-29 08:04:37.784378292 +0000 UTC m=+0.059842981 container remove 37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.793 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a649cb6-daca-44e6-bad3-d61c7aa9c205]: (4, ('Sat Nov 29 08:04:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df)\n37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df\nSat Nov 29 08:04:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df)\n37ba9758c903716613d533f2202570d1707f2ad58ac7b46272f5b31f657d67df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.795 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8d944121-a53f-4d41-8d1b-20af4fcc019e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.796 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.824 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 nova_compute[227360]: 2025-11-29 08:04:37.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.826 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d3086bf0-304e-4b49-a818-afbdd4163d8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.843 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1cd1f7-1d0f-46bf-925a-c756a55bc3d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.845 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3e17d2b5-e8a6-418c-a09e-eb4101484232]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.859 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89ef4814-b73c-4c3a-bff5-5325fca78a95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651383, 'reachable_time': 17738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249069, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.861 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
Nov 29 03:04:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:37.861 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e1385a27-4346-455a-a225-aa5f9e4cb2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.032 227364 INFO nova.virt.libvirt.driver [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Deleting instance files /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc_del#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.033 227364 INFO nova.virt.libvirt.driver [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Deletion of /var/lib/nova/instances/811e2f2f-5e2c-4c60-97e8-c39377dce6dc_del complete#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.093 227364 INFO nova.compute.manager [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.094 227364 DEBUG oslo.service.loopingcall [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.094 227364 DEBUG nova.compute.manager [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:38 np0005539551 nova_compute[227360]: 2025-11-29 08:04:38.094 227364 DEBUG nova.network.neutron [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:39.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.446 227364 DEBUG nova.compute.manager [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-unplugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.446 227364 DEBUG oslo_concurrency.lockutils [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.446 227364 DEBUG oslo_concurrency.lockutils [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.447 227364 DEBUG oslo_concurrency.lockutils [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.447 227364 DEBUG nova.compute.manager [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] No waiting events found dispatching network-vif-unplugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.447 227364 DEBUG nova.compute.manager [req-df6a8ac6-603b-4e0d-9c34-98c126d4d965 req-34ea31c2-9a94-4ae9-adbd-cacf38e7f4db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-unplugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:04:39 np0005539551 nova_compute[227360]: 2025-11-29 08:04:39.657 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.371 227364 DEBUG nova.network.neutron [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.396 227364 INFO nova.compute.manager [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Took 2.30 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.602 227364 INFO nova.compute.manager [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.654 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.655 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:40 np0005539551 nova_compute[227360]: 2025-11-29 08:04:40.704 227364 DEBUG oslo_concurrency.processutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 03:04:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/931946233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.165 227364 DEBUG oslo_concurrency.processutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.173 227364 DEBUG nova.compute.provider_tree [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.188 227364 DEBUG nova.scheduler.client.report [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.208 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:41.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.231 227364 INFO nova.scheduler.client.report [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance 811e2f2f-5e2c-4c60-97e8-c39377dce6dc#033[00m
Nov 29 03:04:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.314 227364 DEBUG oslo_concurrency.lockutils [None req-3ef1bfa2-97eb-4ea5-bb95-a7e42fb87d46 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.632 227364 DEBUG nova.compute.manager [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.633 227364 DEBUG oslo_concurrency.lockutils [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.633 227364 DEBUG oslo_concurrency.lockutils [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.634 227364 DEBUG oslo_concurrency.lockutils [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "811e2f2f-5e2c-4c60-97e8-c39377dce6dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.634 227364 DEBUG nova.compute.manager [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] No waiting events found dispatching network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.635 227364 WARNING nova.compute.manager [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received unexpected event network-vif-plugged-c765ebc3-573f-4453-ab8a-90e3943a89d5 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:04:41 np0005539551 nova_compute[227360]: 2025-11-29 08:04:41.635 227364 DEBUG nova.compute.manager [req-b0ce5cf4-b400-4c4f-8d81-d2bc1f6bcf9a req-533d0737-0630-418f-bc63-23d291d94ac9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Received event network-vif-deleted-c765ebc3-573f-4453-ab8a-90e3943a89d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.289 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403467.288609, d3837ff2-ce61-4144-ac9b-c032a188450a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.290 227364 INFO nova.compute.manager [-] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.308 227364 DEBUG nova.compute.manager [None req-73a09fcf-ce97-4b2e-b99f-b04cc61d477d - - - - - -] [instance: d3837ff2-ce61-4144-ac9b-c032a188450a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.374 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403467.373674, 63ab1668-e1e0-4c32-bfaa-879399657745 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.375 227364 INFO nova.compute.manager [-] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.398 227364 DEBUG nova.compute.manager [None req-f0400241-0b13-4a8e-b918-63f3c7b0bc97 - - - - - -] [instance: 63ab1668-e1e0-4c32-bfaa-879399657745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:42 np0005539551 nova_compute[227360]: 2025-11-29 08:04:42.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:43.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:44 np0005539551 nova_compute[227360]: 2025-11-29 08:04:44.660 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:45.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 03:04:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:47 np0005539551 nova_compute[227360]: 2025-11-29 08:04:47.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 03:04:48 np0005539551 nova_compute[227360]: 2025-11-29 08:04:48.735 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:49.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.600 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.600 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.615 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.695 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.696 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.703 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.704 227364 INFO nova.compute.claims [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:04:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 03:04:49 np0005539551 nova_compute[227360]: 2025-11-29 08:04:49.797 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3883225443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.298 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.309 227364 DEBUG nova.compute.provider_tree [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.352 227364 DEBUG nova.scheduler.client.report [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.384 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.384 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.463 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.463 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.484 227364 INFO nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.511 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.643 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.644 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.644 227364 INFO nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Creating image(s)
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.667 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.691 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.715 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.719 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.751 227364 DEBUG nova.policy [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.805 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.808 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.810 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.811 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.844 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:50 np0005539551 nova_compute[227360]: 2025-11-29 08:04:50.848 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.142 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.202 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:04:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:51.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:51.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.477 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Successfully created port: 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.766 227364 DEBUG nova.objects.instance [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.778 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.779 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Ensure instance console log exists: /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.780 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.780 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:51 np0005539551 nova_compute[227360]: 2025-11-29 08:04:51.780 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.624 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403477.6229286, 811e2f2f-5e2c-4c60-97e8-c39377dce6dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.625 227364 INFO nova.compute.manager [-] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] VM Stopped (Lifecycle Event)
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.658 227364 DEBUG nova.compute.manager [None req-27f47039-4b7d-4fa4-88a8-724c2deee904 - - - - - -] [instance: 811e2f2f-5e2c-4c60-97e8-c39377dce6dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.666 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.936 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Successfully updated port: 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.957 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.957 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:52 np0005539551 nova_compute[227360]: 2025-11-29 08:04:52.958 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:53 np0005539551 nova_compute[227360]: 2025-11-29 08:04:53.203 227364 DEBUG nova.compute.manager [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-changed-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:53 np0005539551 nova_compute[227360]: 2025-11-29 08:04:53.203 227364 DEBUG nova.compute.manager [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Refreshing instance network info cache due to event network-changed-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:04:53 np0005539551 nova_compute[227360]: 2025-11-29 08:04:53.203 227364 DEBUG oslo_concurrency.lockutils [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:53 np0005539551 nova_compute[227360]: 2025-11-29 08:04:53.287 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.375 227364 DEBUG nova.network.neutron [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Updating instance_info_cache with network_info: [{"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.407 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.407 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Instance network_info: |[{"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.408 227364 DEBUG oslo_concurrency.lockutils [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.408 227364 DEBUG nova.network.neutron [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Refreshing network info cache for port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.414 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Start _get_guest_xml network_info=[{"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.420 227364 WARNING nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.427 227364 DEBUG nova.virt.libvirt.host [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.428 227364 DEBUG nova.virt.libvirt.host [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.441 227364 DEBUG nova.virt.libvirt.host [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.442 227364 DEBUG nova.virt.libvirt.host [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.444 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.444 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.446 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.446 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.447 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.447 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.448 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.448 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.449 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.449 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.450 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.451 227364 DEBUG nova.virt.hardware [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.458 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3889819792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.907 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.933 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:54 np0005539551 nova_compute[227360]: 2025-11-29 08:04:54.937 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:55.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1135525152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.367 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.369 227364 DEBUG nova.virt.libvirt.vif [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-284009357',display_name='tempest-DeleteServersTestJSON-server-284009357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-284009357',id=56,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-gehya2hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-2
94503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:50Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.369 227364 DEBUG nova.network.os_vif_util [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.370 227364 DEBUG nova.network.os_vif_util [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.372 227364 DEBUG nova.objects.instance [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.389 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <uuid>15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a</uuid>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <name>instance-00000038</name>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:name>tempest-DeleteServersTestJSON-server-284009357</nova:name>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:04:54</nova:creationTime>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <nova:port uuid="0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="serial">15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="uuid">15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:62:f2:fc"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <target dev="tap0427ac9e-d8"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/console.log" append="off"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:04:55 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:04:55 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:04:55 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:04:55 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.391 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Preparing to wait for external event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.391 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.391 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.392 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.392 227364 DEBUG nova.virt.libvirt.vif [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-284009357',display_name='tempest-DeleteServersTestJSON-server-284009357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-284009357',id=56,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-gehya2hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServers
TestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:50Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.393 227364 DEBUG nova.network.os_vif_util [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.394 227364 DEBUG nova.network.os_vif_util [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.394 227364 DEBUG os_vif [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.395 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.395 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.396 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.398 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.399 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0427ac9e-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.399 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0427ac9e-d8, col_values=(('external_ids', {'iface-id': '0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:f2:fc', 'vm-uuid': '15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539551 NetworkManager[48922]: <info>  [1764403495.4021] manager: (tap0427ac9e-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.403 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.407 227364 INFO os_vif [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8')#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.470 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.471 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.471 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:62:f2:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.472 227364 INFO nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Using config drive#033[00m
Nov 29 03:04:55 np0005539551 nova_compute[227360]: 2025-11-29 08:04:55.507 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.408 227364 INFO nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Creating config drive at /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.413 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz92vkx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.527 227364 DEBUG nova.network.neutron [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Updated VIF entry in instance network info cache for port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.528 227364 DEBUG nova.network.neutron [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Updating instance_info_cache with network_info: [{"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.543 227364 DEBUG oslo_concurrency.lockutils [req-6a7ee64a-5797-4f10-af49-526ba364d664 req-5adeb79f-2b59-4290-82fc-a536abe83989 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.558 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz92vkx7" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.589 227364 DEBUG nova.storage.rbd_utils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.593 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.774 227364 DEBUG oslo_concurrency.processutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.775 227364 INFO nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Deleting local config drive /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:56 np0005539551 kernel: tap0427ac9e-d8: entered promiscuous mode
Nov 29 03:04:56 np0005539551 NetworkManager[48922]: <info>  [1764403496.8329] manager: (tap0427ac9e-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 03:04:56 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:56Z|00185|binding|INFO|Claiming lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for this chassis.
Nov 29 03:04:56 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:56Z|00186|binding|INFO|0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae: Claiming fa:16:3e:62:f2:fc 10.100.0.14
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.834 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.850 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:fc 10.100.0.14'], port_security=['fa:16:3e:62:f2:fc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.851 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.853 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
Nov 29 03:04:56 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:56Z|00187|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae ovn-installed in OVS
Nov 29 03:04:56 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:56Z|00188|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae up in Southbound
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.857 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:56 np0005539551 nova_compute[227360]: 2025-11-29 08:04:56.859 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.866 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[603244bb-5049-454f-8843-9d517096b2bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.867 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.870 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.870 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5fffdc-99b7-40d4-8261-e46ffea98778]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 systemd-machined[190756]: New machine qemu-27-instance-00000038.
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.871 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[00d8df6f-59e5-4d1c-a596-6d2b75b09786]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 systemd[1]: Started Virtual Machine qemu-27-instance-00000038.
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.883 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f65c5b-f038-4f86-8026-eecbdb5f2b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 systemd-udevd[249472]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.905 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a0283172-0cd8-4a86-bdf8-71cca38b6836]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 NetworkManager[48922]: <info>  [1764403496.9126] device (tap0427ac9e-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:56 np0005539551 NetworkManager[48922]: <info>  [1764403496.9135] device (tap0427ac9e-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.934 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[34712852-690c-4c43-b061-fb67f25af149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.939 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[76756066-63d9-4113-9408-c6b9c7f15273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 NetworkManager[48922]: <info>  [1764403496.9404] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.965 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ca502-e0b5-43d6-82fb-86fd54167623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.968 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[63647e33-5c51-4ed0-8149-773b7fc10b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539551 NetworkManager[48922]: <info>  [1764403496.9880] device (tapa8be8715-20): carrier: link connected
Nov 29 03:04:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:56.991 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5383f528-6c36-4efe-8377-cc0bc14b9fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.007 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ddaeba-4c2e-44bf-8479-afe31722c56b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654386, 'reachable_time': 31408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249502, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.019 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2fa347-0667-4597-96b7-0c16829ec3c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654386, 'tstamp': 654386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249503, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.033 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7aa66e1-625d-4719-87e9-6b90635fc579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654386, 'reachable_time': 31408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249504, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.061 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[200fbdc5-008d-4bc7-8df4-0252d38b3ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.112 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0285e59c-06e4-4162-8d0d-0d39c42e7e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.114 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.114 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.114 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.155 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:57 np0005539551 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:04:57 np0005539551 NetworkManager[48922]: <info>  [1764403497.1564] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.158 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:04:57Z|00189|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.181 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.182 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfda387-cde3-4e40-8592-381995a49e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.183 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:57.183 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:57.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:57.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539551 podman[249536]: 2025-11-29 08:04:57.532854292 +0000 UTC m=+0.058454037 container create b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:04:57 np0005539551 systemd[1]: Started libpod-conmon-b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9.scope.
Nov 29 03:04:57 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:04:57 np0005539551 podman[249536]: 2025-11-29 08:04:57.504155298 +0000 UTC m=+0.029755073 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:57 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71995d111b687e1d7132ab58dc74cc5edce3444b2109d5a630ffddc1a44b5858/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:57 np0005539551 podman[249536]: 2025-11-29 08:04:57.61311246 +0000 UTC m=+0.138712225 container init b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:57 np0005539551 podman[249536]: 2025-11-29 08:04:57.618554455 +0000 UTC m=+0.144154200 container start b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:04:57 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [NOTICE]   (249570) : New worker (249582) forked
Nov 29 03:04:57 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [NOTICE]   (249570) : Loading success.
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.761 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403497.7611713, 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.761 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.784 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.787 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403497.761341, 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.787 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.834 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.837 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:57 np0005539551 nova_compute[227360]: 2025-11-29 08:04:57.870 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.538 227364 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.539 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.539 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.539 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.539 227364 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Processing event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.540 227364 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.540 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.540 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.540 227364 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.540 227364 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.541 227364 WARNING nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.541 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.544 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403498.544616, 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.545 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.546 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.549 227364 INFO nova.virt.libvirt.driver [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Instance spawned successfully.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.549 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.565 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.571 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.573 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.573 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.574 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.574 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.574 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.575 227364 DEBUG nova.virt.libvirt.driver [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.609 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.632 227364 INFO nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Took 7.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.632 227364 DEBUG nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.693 227364 INFO nova.compute.manager [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Took 9.03 seconds to build instance.#033[00m
Nov 29 03:04:58 np0005539551 nova_compute[227360]: 2025-11-29 08:04:58.717 227364 DEBUG oslo_concurrency.lockutils [None req-79896c71-412a-4a71-ac74-4a2b2c6c451a 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:04:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:59.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:59 np0005539551 podman[249609]: 2025-11-29 08:04:59.615696071 +0000 UTC m=+0.073318274 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:04:59 np0005539551 podman[249610]: 2025-11-29 08:04:59.634628215 +0000 UTC m=+0.086093303 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:04:59 np0005539551 nova_compute[227360]: 2025-11-29 08:04:59.673 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:59 np0005539551 podman[249608]: 2025-11-29 08:04:59.698217828 +0000 UTC m=+0.147077497 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:04:59 np0005539551 nova_compute[227360]: 2025-11-29 08:04:59.761 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:59.761 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:04:59.763 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:04:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 03:04:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.513 227364 INFO nova.compute.manager [None req-a0b09472-938f-443b-a7e1-522c0907bbd5 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Pausing#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.514 227364 DEBUG nova.objects.instance [None req-a0b09472-938f-443b-a7e1-522c0907bbd5 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'flavor' on Instance uuid 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.541 227364 DEBUG nova.compute.manager [None req-a0b09472-938f-443b-a7e1-522c0907bbd5 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.541 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403500.5416772, 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.542 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.564 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.567 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:00 np0005539551 nova_compute[227360]: 2025-11-29 08:05:00.589 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:05:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 03:05:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:01.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:01.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.732 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.732 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.732 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.732 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.733 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.734 227364 INFO nova.compute.manager [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Terminating instance#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.735 227364 DEBUG nova.compute.manager [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:02 np0005539551 kernel: tap0427ac9e-d8 (unregistering): left promiscuous mode
Nov 29 03:05:02 np0005539551 NetworkManager[48922]: <info>  [1764403502.7806] device (tap0427ac9e-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.787 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00190|binding|INFO|Releasing lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae from this chassis (sb_readonly=0)
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00191|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae down in Southbound
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00192|binding|INFO|Removing iface tap0427ac9e-d8 ovn-installed in OVS
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.789 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.794 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:fc 10.100.0.14'], port_security=['fa:16:3e:62:f2:fc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.795 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.796 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.797 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d61968b9-6cb1-4ba9-9ce3-021d66caa80c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.797 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.810 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:02 np0005539551 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 29 03:05:02 np0005539551 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000038.scope: Consumed 3.010s CPU time.
Nov 29 03:05:02 np0005539551 systemd-machined[190756]: Machine qemu-27-instance-00000038 terminated.
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [NOTICE]   (249570) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [NOTICE]   (249570) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [WARNING]  (249570) : Exiting Master process...
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [WARNING]  (249570) : Exiting Master process...
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [ALERT]    (249570) : Current worker (249582) exited with code 143 (Terminated)
Nov 29 03:05:02 np0005539551 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[249551]: [WARNING]  (249570) : All workers exited. Exiting... (0)
Nov 29 03:05:02 np0005539551 systemd[1]: libpod-b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9.scope: Deactivated successfully.
Nov 29 03:05:02 np0005539551 podman[249691]: 2025-11-29 08:05:02.91787496 +0000 UTC m=+0.039987255 container died b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:05:02 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:02 np0005539551 systemd[1]: var-lib-containers-storage-overlay-71995d111b687e1d7132ab58dc74cc5edce3444b2109d5a630ffddc1a44b5858-merged.mount: Deactivated successfully.
Nov 29 03:05:02 np0005539551 podman[249691]: 2025-11-29 08:05:02.95313411 +0000 UTC m=+0.075246415 container cleanup b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:05:02 np0005539551 kernel: tap0427ac9e-d8: entered promiscuous mode
Nov 29 03:05:02 np0005539551 NetworkManager[48922]: <info>  [1764403502.9540] manager: (tap0427ac9e-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00193|binding|INFO|Claiming lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for this chassis.
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00194|binding|INFO|0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae: Claiming fa:16:3e:62:f2:fc 10.100.0.14
Nov 29 03:05:02 np0005539551 kernel: tap0427ac9e-d8 (unregistering): left promiscuous mode
Nov 29 03:05:02 np0005539551 systemd[1]: libpod-conmon-b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9.scope: Deactivated successfully.
Nov 29 03:05:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:02.963 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:fc 10.100.0.14'], port_security=['fa:16:3e:62:f2:fc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00195|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae ovn-installed in OVS
Nov 29 03:05:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:02Z|00196|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae up in Southbound
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.982 227364 INFO nova.virt.libvirt.driver [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Instance destroyed successfully.#033[00m
Nov 29 03:05:02 np0005539551 nova_compute[227360]: 2025-11-29 08:05:02.983 227364 DEBUG nova.objects.instance [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.003 227364 DEBUG nova.virt.libvirt.vif [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-284009357',display_name='tempest-DeleteServersTestJSON-server-284009357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-284009357',id=56,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-gehya2hj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:00Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.004 227364 DEBUG nova.network.os_vif_util [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "address": "fa:16:3e:62:f2:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0427ac9e-d8", "ovs_interfaceid": "0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.004 227364 DEBUG nova.network.os_vif_util [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.005 227364 DEBUG os_vif [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.007 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.007 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0427ac9e-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:03 np0005539551 podman[249724]: 2025-11-29 08:05:03.023728009 +0000 UTC m=+0.044668320 container remove b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:03Z|00197|binding|INFO|Releasing lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae from this chassis (sb_readonly=0)
Nov 29 03:05:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:03Z|00198|binding|INFO|Setting lport 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae down in Southbound
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.028 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.030 227364 INFO os_vif [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:fc,bridge_name='br-int',has_traffic_filtering=True,id=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0427ac9e-d8')#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.033 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:fc 10.100.0.14'], port_security=['fa:16:3e:62:f2:fc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.032 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[67c89fa8-390b-498e-9b6e-6b46baca8e68]: (4, ('Sat Nov 29 08:05:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9)\nb5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9\nSat Nov 29 08:05:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (b5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9)\nb5de740226bbf78d48a7fc3951ccf01a776893658fc6943080d5b993659fe7b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.035 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[44231491-6737-409c-9f3e-29376c83fd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.035 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.047 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539551 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.058 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.060 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7fb884-bdd4-4df8-8db0-b9dc2047c46c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.073 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[031b8484-1546-4f2c-b7d7-a90a7ed990b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.075 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a62bbdcb-6906-4194-9574-0aac2f473f9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.088 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f2c386-e9d3-47e1-ae56-dc6055dd77b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654380, 'reachable_time': 26553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249755, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.090 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.091 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a19e81-6fc2-469c-b073-4a54afe12339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.091 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.092 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[480ce67f-5211-4b84-9d8f-e2ddf5729cf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.093 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.094 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:03.094 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcc0631-1540-4f63-8357-14780ded754a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.101 227364 DEBUG nova.compute.manager [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.102 227364 DEBUG oslo_concurrency.lockutils [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.102 227364 DEBUG oslo_concurrency.lockutils [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.102 227364 DEBUG oslo_concurrency.lockutils [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.102 227364 DEBUG nova.compute.manager [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.102 227364 DEBUG nova.compute.manager [req-63933a75-4901-4821-9033-4e8edd4c5d1c req-d73fa73e-fb49-4267-9d4a-27691f2224b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:05:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:03.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:03.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.878 227364 INFO nova.virt.libvirt.driver [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Deleting instance files /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_del#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.879 227364 INFO nova.virt.libvirt.driver [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Deletion of /var/lib/nova/instances/15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a_del complete#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.941 227364 INFO nova.compute.manager [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.942 227364 DEBUG oslo.service.loopingcall [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.942 227364 DEBUG nova.compute.manager [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:05:03 np0005539551 nova_compute[227360]: 2025-11-29 08:05:03.943 227364 DEBUG nova.network.neutron [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:05:04 np0005539551 nova_compute[227360]: 2025-11-29 08:05:04.675 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.174 227364 DEBUG nova.network.neutron [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.196 227364 INFO nova.compute.manager [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.247 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.248 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:05.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:05.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.332 227364 DEBUG oslo_concurrency.processutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.392 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.392 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.393 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.393 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.393 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.393 227364 WARNING nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.394 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.394 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.394 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.394 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.394 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.395 227364 WARNING nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.395 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.395 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.395 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.396 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.396 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.396 227364 WARNING nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.396 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.396 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.397 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.397 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.397 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.397 227364 WARNING nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-unplugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.397 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.398 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.398 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.398 227364 DEBUG oslo_concurrency.lockutils [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.398 227364 DEBUG nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] No waiting events found dispatching network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.398 227364 WARNING nova.compute.manager [req-33528693-9b65-4b3d-84c8-1e10fa7bd7f6 req-c0e1e717-0218-4a1d-a4d0-5e79c2ff99f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received unexpected event network-vif-plugged-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.516 227364 DEBUG nova.compute.manager [req-8888eca8-63da-4ed9-9416-99b3314a561a req-b45e5203-69ac-4502-914f-70c07b52f3b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Received event network-vif-deleted-0427ac9e-d891-4ad2-9e2e-e7b2ae8c46ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096707745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.801 227364 DEBUG oslo_concurrency.processutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.807 227364 DEBUG nova.compute.provider_tree [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.830 227364 DEBUG nova.scheduler.client.report [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.870 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539551 nova_compute[227360]: 2025-11-29 08:05:05.907 227364 INFO nova.scheduler.client.report [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a#033[00m
Nov 29 03:05:06 np0005539551 nova_compute[227360]: 2025-11-29 08:05:06.036 227364 DEBUG oslo_concurrency.lockutils [None req-a9dbd011-c95b-4edb-a51e-dae31ab03c4b 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 03:05:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:07.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:08 np0005539551 nova_compute[227360]: 2025-11-29 08:05:08.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:09 np0005539551 nova_compute[227360]: 2025-11-29 08:05:09.676 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:09.764 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 03:05:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:05:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 6249 writes, 33K keys, 6249 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 6249 writes, 6249 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1581 writes, 7767 keys, 1581 commit groups, 1.0 writes per commit group, ingest: 16.18 MB, 0.03 MB/s#012Interval WAL: 1582 writes, 1582 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.3      4.97              0.15        18    0.276       0      0       0.0       0.0#012  L6      1/0   10.65 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2     19.5     16.5     10.36              0.64        17    0.610     93K   9102       0.0       0.0#012 Sum      1/0   10.65 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2     13.2     13.9     15.34              0.79        35    0.438     93K   9102       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   4.8     14.5     14.8      3.46              0.21         8    0.432     26K   2574       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     19.5     16.5     10.36              0.64        17    0.610     93K   9102       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.3      4.97              0.15        17    0.292       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.20 GB read, 0.07 MB/s read, 15.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 3.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 19.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00016 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1049,18.27 MB,6.01142%) FilterBlock(35,263.86 KB,0.0847616%) IndexBlock(35,491.22 KB,0.157798%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.111179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511111231, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 263, "total_data_size": 4612742, "memory_usage": 4670664, "flush_reason": "Manual Compaction"}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511129483, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2065724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32052, "largest_seqno": 34196, "table_properties": {"data_size": 2058157, "index_size": 4257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 19552, "raw_average_key_size": 22, "raw_value_size": 2041730, "raw_average_value_size": 2309, "num_data_blocks": 185, "num_entries": 884, "num_filter_entries": 884, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403375, "oldest_key_time": 1764403375, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 18392 microseconds, and 5622 cpu microseconds.
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.129563) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2065724 bytes OK
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.129590) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.133153) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.133186) EVENT_LOG_v1 {"time_micros": 1764403511133177, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.133212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4602836, prev total WAL file size 4602836, number of live WAL files 2.
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.135459) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(2017KB)], [63(10MB)]
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511135537, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 13228358, "oldest_snapshot_seqno": -1}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6698 keys, 10317653 bytes, temperature: kUnknown
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511207428, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 10317653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10274012, "index_size": 25801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 170845, "raw_average_key_size": 25, "raw_value_size": 10154778, "raw_average_value_size": 1516, "num_data_blocks": 1038, "num_entries": 6698, "num_filter_entries": 6698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.207691) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10317653 bytes
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.209734) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.9 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 7170, records dropped: 472 output_compression: NoCompression
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.209755) EVENT_LOG_v1 {"time_micros": 1764403511209745, "job": 38, "event": "compaction_finished", "compaction_time_micros": 71950, "compaction_time_cpu_micros": 24967, "output_level": 6, "num_output_files": 1, "total_output_size": 10317653, "num_input_records": 7170, "num_output_records": 6698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511210244, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511212531, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.135355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.212648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.212654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.212655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.212656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:11.212657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:11.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 03:05:13 np0005539551 nova_compute[227360]: 2025-11-29 08:05:13.027 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:13.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:13 np0005539551 nova_compute[227360]: 2025-11-29 08:05:13.432 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:14 np0005539551 nova_compute[227360]: 2025-11-29 08:05:14.679 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:15.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 03:05:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:17.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.425 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.426 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.426 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.981 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403502.9804528, 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:17 np0005539551 nova_compute[227360]: 2025-11-29 08:05:17.982 227364 INFO nova.compute.manager [-] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:05:18 np0005539551 nova_compute[227360]: 2025-11-29 08:05:18.000 227364 DEBUG nova.compute.manager [None req-257d4a13-5136-4d0e-b12b-bc18c74bd920 - - - - - -] [instance: 15c9e68e-1d94-4cff-a4da-2ac2f08ffa5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:18 np0005539551 nova_compute[227360]: 2025-11-29 08:05:18.028 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:18 np0005539551 nova_compute[227360]: 2025-11-29 08:05:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:18 np0005539551 nova_compute[227360]: 2025-11-29 08:05:18.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:19 np0005539551 nova_compute[227360]: 2025-11-29 08:05:19.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:19.856 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:19.857 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:19.857 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 03:05:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:21 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.437 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.438 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.438 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.438 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.438 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/103710360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:21 np0005539551 nova_compute[227360]: 2025-11-29 08:05:21.946 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.100 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.102 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4749MB free_disk=20.940444946289062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.103 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.103 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.202 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.203 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.218 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.339 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.339 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.366 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.394 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.408 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/101397820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.823 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.829 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.848 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.891 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:05:22 np0005539551 nova_compute[227360]: 2025-11-29 08:05:22.891 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:23 np0005539551 nova_compute[227360]: 2025-11-29 08:05:23.029 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 03:05:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:24 np0005539551 nova_compute[227360]: 2025-11-29 08:05:24.681 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:24 np0005539551 nova_compute[227360]: 2025-11-29 08:05:24.892 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 03:05:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:25.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 03:05:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 03:05:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:27.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539551 nova_compute[227360]: 2025-11-29 08:05:28.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:29.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:29 np0005539551 nova_compute[227360]: 2025-11-29 08:05:29.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:29 np0005539551 nova_compute[227360]: 2025-11-29 08:05:29.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:05:29 np0005539551 nova_compute[227360]: 2025-11-29 08:05:29.738 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:30 np0005539551 podman[249826]: 2025-11-29 08:05:30.598278746 +0000 UTC m=+0.056359932 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:05:30 np0005539551 podman[249827]: 2025-11-29 08:05:30.609171636 +0000 UTC m=+0.058535090 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:05:30 np0005539551 podman[249825]: 2025-11-29 08:05:30.628013058 +0000 UTC m=+0.078085380 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:05:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539551 nova_compute[227360]: 2025-11-29 08:05:33.034 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:33.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:34 np0005539551 nova_compute[227360]: 2025-11-29 08:05:34.741 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.264 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.264 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.280 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.362 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.362 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:35.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.372 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.373 227364 INFO nova.compute.claims [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.500 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4270237107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.931 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.937 227364 DEBUG nova.compute.provider_tree [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:35 np0005539551 nova_compute[227360]: 2025-11-29 08:05:35.973 227364 DEBUG nova.scheduler.client.report [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.179 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.180 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:05:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.229 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.230 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.256 227364 INFO nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.275 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.381 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.382 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.383 227364 INFO nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Creating image(s)#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.414 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.438 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.463 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.467 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.487 227364 DEBUG nova.policy [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.524 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.525 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.526 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.526 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.552 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.556 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7c7286ee-7432-4e68-a574-bbe535d1f203_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.903 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7c7286ee-7432-4e68-a574-bbe535d1f203_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:36 np0005539551 nova_compute[227360]: 2025-11-29 08:05:36.966 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] resizing rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.066 227364 DEBUG nova.objects.instance [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c7286ee-7432-4e68-a574-bbe535d1f203 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.078 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.078 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Ensure instance console log exists: /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.078 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.079 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.079 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:37.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:37 np0005539551 nova_compute[227360]: 2025-11-29 08:05:37.667 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Successfully created port: 29f98dd6-993d-4b30-9f39-81b5ae632943 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:38 np0005539551 nova_compute[227360]: 2025-11-29 08:05:38.037 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:39.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:39.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.428 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Successfully updated port: 29f98dd6-993d-4b30-9f39-81b5ae632943 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.449 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.449 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.449 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.545 227364 DEBUG nova.compute.manager [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-changed-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.546 227364 DEBUG nova.compute.manager [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Refreshing instance network info cache due to event network-changed-29f98dd6-993d-4b30-9f39-81b5ae632943. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.546 227364 DEBUG oslo_concurrency.lockutils [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.615 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:39 np0005539551 nova_compute[227360]: 2025-11-29 08:05:39.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.445 227364 DEBUG nova.network.neutron [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updating instance_info_cache with network_info: [{"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.463 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.463 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Instance network_info: |[{"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.464 227364 DEBUG oslo_concurrency.lockutils [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.464 227364 DEBUG nova.network.neutron [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Refreshing network info cache for port 29f98dd6-993d-4b30-9f39-81b5ae632943 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.466 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Start _get_guest_xml network_info=[{"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.471 227364 WARNING nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.475 227364 DEBUG nova.virt.libvirt.host [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.476 227364 DEBUG nova.virt.libvirt.host [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.479 227364 DEBUG nova.virt.libvirt.host [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.479 227364 DEBUG nova.virt.libvirt.host [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.481 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.481 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.482 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.483 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.483 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.484 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.484 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.484 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.484 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.485 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.485 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.485 227364 DEBUG nova.virt.hardware [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.489 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4134411528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.920 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.945 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:40 np0005539551 nova_compute[227360]: 2025-11-29 08:05:40.949 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 03:05:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/39788765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.385 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.387 227364 DEBUG nova.virt.libvirt.vif [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1247070214',display_name='tempest-ImagesTestJSON-server-1247070214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1247070214',id=59,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-jkf90zee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:36Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=7c7286ee-7432-4e68-a574-bbe535d1f203,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.387 227364 DEBUG nova.network.os_vif_util [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.388 227364 DEBUG nova.network.os_vif_util [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.389 227364 DEBUG nova.objects.instance [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c7286ee-7432-4e68-a574-bbe535d1f203 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.405 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <uuid>7c7286ee-7432-4e68-a574-bbe535d1f203</uuid>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <name>instance-0000003b</name>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:name>tempest-ImagesTestJSON-server-1247070214</nova:name>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:05:40</nova:creationTime>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <nova:port uuid="29f98dd6-993d-4b30-9f39-81b5ae632943">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="serial">7c7286ee-7432-4e68-a574-bbe535d1f203</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="uuid">7c7286ee-7432-4e68-a574-bbe535d1f203</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7c7286ee-7432-4e68-a574-bbe535d1f203_disk">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:de:f9:ef"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <target dev="tap29f98dd6-99"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/console.log" append="off"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:05:41 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:05:41 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:05:41 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:05:41 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.406 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Preparing to wait for external event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.406 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.407 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.407 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.407 227364 DEBUG nova.virt.libvirt.vif [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1247070214',display_name='tempest-ImagesTestJSON-server-1247070214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1247070214',id=59,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-jkf90zee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:36Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=7c7286ee-7432-4e68-a574-bbe535d1f203,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.407 227364 DEBUG nova.network.os_vif_util [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.408 227364 DEBUG nova.network.os_vif_util [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.408 227364 DEBUG os_vif [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.409 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.409 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.409 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.412 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.412 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29f98dd6-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.412 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29f98dd6-99, col_values=(('external_ids', {'iface-id': '29f98dd6-993d-4b30-9f39-81b5ae632943', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:f9:ef', 'vm-uuid': '7c7286ee-7432-4e68-a574-bbe535d1f203'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.416 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:41 np0005539551 NetworkManager[48922]: <info>  [1764403541.4168] manager: (tap29f98dd6-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.420 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.421 227364 INFO os_vif [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99')#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.469 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.470 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.470 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:de:f9:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.471 227364 INFO nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Using config drive#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.497 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.854 227364 DEBUG nova.network.neutron [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updated VIF entry in instance network info cache for port 29f98dd6-993d-4b30-9f39-81b5ae632943. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.854 227364 DEBUG nova.network.neutron [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updating instance_info_cache with network_info: [{"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.862 227364 INFO nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Creating config drive at /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.866 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4t6mznc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.889 227364 DEBUG oslo_concurrency.lockutils [req-866b42f2-20a0-48a9-88df-b0af2ec3b471 req-72c94cd4-f918-4243-9914-b1f2232cfead 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.979592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541979631, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1014, "num_deletes": 512, "total_data_size": 1136220, "memory_usage": 1159240, "flush_reason": "Manual Compaction"}
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541986072, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 745489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34201, "largest_seqno": 35210, "table_properties": {"data_size": 741098, "index_size": 1531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13227, "raw_average_key_size": 18, "raw_value_size": 730084, "raw_average_value_size": 1038, "num_data_blocks": 67, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403511, "oldest_key_time": 1764403511, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 6513 microseconds, and 2858 cpu microseconds.
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.986106) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 745489 bytes OK
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.986123) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.987911) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.987927) EVENT_LOG_v1 {"time_micros": 1764403541987923, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.987941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1130062, prev total WAL file size 1130062, number of live WAL files 2.
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.988392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(728KB)], [66(10075KB)]
Nov 29 03:05:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541988415, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11063142, "oldest_snapshot_seqno": -1}
Nov 29 03:05:41 np0005539551 nova_compute[227360]: 2025-11-29 08:05:41.995 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg4t6mznc" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.020 227364 DEBUG nova.storage.rbd_utils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.022 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config 7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6363 keys, 8909246 bytes, temperature: kUnknown
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542073184, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8909246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8868257, "index_size": 23983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 165927, "raw_average_key_size": 26, "raw_value_size": 8755157, "raw_average_value_size": 1375, "num_data_blocks": 950, "num_entries": 6363, "num_filter_entries": 6363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.073399) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8909246 bytes
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.075834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.4 rd, 105.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.8) write-amplify(12.0) OK, records in: 7401, records dropped: 1038 output_compression: NoCompression
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.075877) EVENT_LOG_v1 {"time_micros": 1764403542075862, "job": 40, "event": "compaction_finished", "compaction_time_micros": 84822, "compaction_time_cpu_micros": 20789, "output_level": 6, "num_output_files": 1, "total_output_size": 8909246, "num_input_records": 7401, "num_output_records": 6363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542076205, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542077874, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:41.988279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.077914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.077919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.077921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.077922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:05:42.077924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.281 227364 DEBUG oslo_concurrency.processutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config 7c7286ee-7432-4e68-a574-bbe535d1f203_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.282 227364 INFO nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Deleting local config drive /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:42 np0005539551 kernel: tap29f98dd6-99: entered promiscuous mode
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.335 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:42Z|00199|binding|INFO|Claiming lport 29f98dd6-993d-4b30-9f39-81b5ae632943 for this chassis.
Nov 29 03:05:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:42Z|00200|binding|INFO|29f98dd6-993d-4b30-9f39-81b5ae632943: Claiming fa:16:3e:de:f9:ef 10.100.0.3
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.3360] manager: (tap29f98dd6-99): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.341 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:f9:ef 10.100.0.3'], port_security=['fa:16:3e:de:f9:ef 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c7286ee-7432-4e68-a574-bbe535d1f203', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=29f98dd6-993d-4b30-9f39-81b5ae632943) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.342 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 29f98dd6-993d-4b30-9f39-81b5ae632943 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.343 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:05:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:42Z|00201|binding|INFO|Setting lport 29f98dd6-993d-4b30-9f39-81b5ae632943 ovn-installed in OVS
Nov 29 03:05:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:42Z|00202|binding|INFO|Setting lport 29f98dd6-993d-4b30-9f39-81b5ae632943 up in Southbound
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.356 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[092b1ab6-3b85-49d6-b47e-31fcd610c402]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.358 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.358 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.360 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.360 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb59d461-1a67-4e9f-93bb-dd83f714e336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.361 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6142c1-07ad-413b-b443-315d6f339966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 systemd-udevd[250340]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:42 np0005539551 systemd-machined[190756]: New machine qemu-28-instance-0000003b.
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.373 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[83d01e49-9e24-45c1-9878-c88b5f37f408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.3821] device (tap29f98dd6-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.3830] device (tap29f98dd6-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:42 np0005539551 systemd[1]: Started Virtual Machine qemu-28-instance-0000003b.
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.388 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b6484b6c-da39-4f86-8c42-87b36429675d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.416 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[57a0466d-ac21-4175-a64c-3f2f1a394785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.421 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[899f3d4d-3732-487c-bc75-20b27f1bb3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.4226] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 03:05:42 np0005539551 systemd-udevd[250344]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.460 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[087e0e4d-0e99-4cc5-9d9c-2bbdd743c703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.465 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dd35c0-ccb1-4378-b138-7efafb1aa3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.4921] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.501 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a6280c3b-b038-481f-9af7-c46b6b6da32d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.520 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3a137b-61a9-4fe9-a79f-ff56dfb757bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658937, 'reachable_time': 17702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250372, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.535 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c49fb3b-94a1-4ec2-86d5-453094b6d268]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658937, 'tstamp': 658937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250373, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.554 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf72cb7-ab1e-424a-9541-57fa6fe5522a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658937, 'reachable_time': 17702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250389, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.586 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5621e84-2fad-459f-88bd-06cb1fc50b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.645 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c2189a9a-50e9-4a41-a497-5f72c8fd7f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.646 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.646 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.647 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.648 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:05:42 np0005539551 NetworkManager[48922]: <info>  [1764403542.6491] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.652 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.654 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:42Z|00203|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.654 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.656 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.656 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[59d8e523-d0fa-4576-bdde-394a109531c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.657 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:05:42.658 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.671 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.712 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403542.7115192, 7c7286ee-7432-4e68-a574-bbe535d1f203 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.712 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.735 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.740 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403542.7124732, 7c7286ee-7432-4e68-a574-bbe535d1f203 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.741 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.764 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.767 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:42 np0005539551 nova_compute[227360]: 2025-11-29 08:05:42.789 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:43 np0005539551 podman[250448]: 2025-11-29 08:05:42.989395203 +0000 UTC m=+0.019735567 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:43 np0005539551 podman[250448]: 2025-11-29 08:05:43.352194604 +0000 UTC m=+0.382534998 container create 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:05:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:43.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:43 np0005539551 systemd[1]: Started libpod-conmon-6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753.scope.
Nov 29 03:05:43 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:05:43 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45609afc19b98adbab860f11e46b52531327e11f7e784379ad2e3085ac116b24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:43 np0005539551 podman[250448]: 2025-11-29 08:05:43.448987232 +0000 UTC m=+0.479327586 container init 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:05:43 np0005539551 podman[250448]: 2025-11-29 08:05:43.459102621 +0000 UTC m=+0.489442965 container start 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:43 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [NOTICE]   (250468) : New worker (250470) forked
Nov 29 03:05:43 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [NOTICE]   (250468) : Loading success.
Nov 29 03:05:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 03:05:44 np0005539551 nova_compute[227360]: 2025-11-29 08:05:44.744 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.008 227364 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.008 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.009 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.009 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.009 227364 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Processing event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.009 227364 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.010 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.010 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.010 227364 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.010 227364 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] No waiting events found dispatching network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.011 227364 WARNING nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received unexpected event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 for instance with vm_state building and task_state spawning.
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.011 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
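The lock lines above show Nova popping an event waiter under a per-instance named lock ("<uuid>-events"), and the WARNING branch firing when no waiter is registered. A minimal stdlib sketch of that pattern (the real implementation is oslo_concurrency.lockutils plus nova.compute.manager.InstanceEvents; names and data shapes here are illustrative):

```python
# Sketch of the named-lock event pop seen in the log: acquire the
# "<instance>-events" lock, pop the waiter for the event name, release.
# Returning None corresponds to the "No waiting events found" /
# "Received unexpected event" branch.
import threading
from collections import defaultdict

_locks = defaultdict(threading.Lock)  # one lock per key, created on demand


def pop_instance_event(events, instance_uuid, event_name):
    """Pop a waiter for event_name under the instance's event lock."""
    with _locks[f"{instance_uuid}-events"]:
        waiters = events.get(instance_uuid, {})
        return waiters.pop(event_name, None)


events = {"7c7286ee": {"network-vif-plugged-29f98dd6": "waiter"}}
print(pop_instance_event(events, "7c7286ee", "network-vif-plugged-29f98dd6"))  # waiter
print(pop_instance_event(events, "7c7286ee", "network-vif-plugged-29f98dd6"))  # None
```

The second pop returns None because the first one consumed the waiter — the same race that makes a late duplicate network-vif-plugged event land in the "unexpected" path while the instance is still building/spawning.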
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.015 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403545.0154097, 7c7286ee-7432-4e68-a574-bbe535d1f203 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.016 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] VM Resumed (Lifecycle Event)
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.017 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.021 227364 INFO nova.virt.libvirt.driver [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Instance spawned successfully.
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.021 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.062 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.065 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.073 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.073 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.074 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.074 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.074 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.075 227364 DEBUG nova.virt.libvirt.driver [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.121 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] During sync_power_state the instance has a pending task (spawning). Skip.
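The two sync lines above (DB power_state 0 vs VM power_state 1, then "pending task (spawning). Skip.") reflect Nova refusing to reconcile power state while a task is in flight. A hedged sketch of that decision, assuming the nova.compute.power_state constant values (NOSTATE=0, RUNNING=1); the function is illustrative, not Nova's actual implementation:

```python
# Illustrative power-state sync decision: a pending task_state short-
# circuits reconciliation so the periodic sync does not fight an
# in-progress operation such as spawning.
NOSTATE, RUNNING = 0, 1  # values as in nova.compute.power_state (assumed)


def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        # "During sync_power_state the instance has a pending task. Skip."
        return "skip"
    if db_power_state != vm_power_state:
        return "update-db"
    return "in-sync"


print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # skip
print(sync_power_state(NOSTATE, RUNNING, None))        # update-db
```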
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.154 227364 INFO nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Took 8.77 seconds to spawn the instance on the hypervisor.
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.155 227364 DEBUG nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.224 227364 INFO nova.compute.manager [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Took 9.90 seconds to build instance.
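Durations like "Took 9.90 seconds to build instance" can be cross-checked against the service-side timestamps in the messages themselves (format `%Y-%m-%d %H:%M:%S.%f`). A small helper for measuring the span between two such timestamps, using values taken from the lines above:

```python
# Compute elapsed seconds between two nova_compute log timestamps.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"


def elapsed(start: str, end: str) -> float:
    """Seconds between two timestamps in nova's log format."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()


# From the event-processing line (08:05:45.008) to "Took 9.90 seconds
# to build instance" (08:05:45.224):
print(elapsed("2025-11-29 08:05:45.008", "2025-11-29 08:05:45.224"))  # 0.216
```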
Nov 29 03:05:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:45 np0005539551 nova_compute[227360]: 2025-11-29 08:05:45.247 227364 DEBUG oslo_concurrency.lockutils [None req-17af08eb-21fd-4d79-ae49-874e682e4c88 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
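The recurring radosgw "beast" access lines above follow a combined-log-like layout (client IP, user, timestamp, request, status, bytes, latency). A hedged parser for that layout — the regex is written against the exact lines shown here, and other radosgw versions may format fields differently:

```python
# Parse a radosgw beast access-log line as seen in this capture.
import re

BEAST_RE = re.compile(
    r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:05:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

m = BEAST_RE.search(line)
print(m.group("ip"), m.group("status"), m.group("latency"))  # 192.168.122.100 200 0.000000000
```

These particular requests are anonymous `HEAD /` probes from 192.168.122.100/.102 every two seconds — consistent with a load-balancer health check rather than client traffic.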
Nov 29 03:05:45 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 03:05:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 03:05:46 np0005539551 nova_compute[227360]: 2025-11-29 08:05:46.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:47 np0005539551 nova_compute[227360]: 2025-11-29 08:05:47.184 227364 DEBUG nova.compute.manager [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:47 np0005539551 nova_compute[227360]: 2025-11-29 08:05:47.229 227364 INFO nova.compute.manager [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] instance snapshotting
Nov 29 03:05:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 03:05:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:47 np0005539551 nova_compute[227360]: 2025-11-29 08:05:47.504 227364 INFO nova.virt.libvirt.driver [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Beginning live snapshot process
Nov 29 03:05:47 np0005539551 nova_compute[227360]: 2025-11-29 08:05:47.653 227364 DEBUG nova.virt.libvirt.imagebackend [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:05:47 np0005539551 nova_compute[227360]: 2025-11-29 08:05:47.873 227364 DEBUG nova.storage.rbd_utils [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(fd2d579cca9d4c91a5394504f57c1581) on rbd image(7c7286ee-7432-4e68-a574-bbe535d1f203_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:05:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 03:05:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:49 np0005539551 nova_compute[227360]: 2025-11-29 08:05:49.745 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 03:05:50 np0005539551 nova_compute[227360]: 2025-11-29 08:05:50.806 227364 DEBUG nova.storage.rbd_utils [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning vms/7c7286ee-7432-4e68-a574-bbe535d1f203_disk@fd2d579cca9d4c91a5394504f57c1581 to images/37d12ee4-89a4-499b-beea-6b6d4e46474b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:05:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:51 np0005539551 nova_compute[227360]: 2025-11-29 08:05:51.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:51 np0005539551 nova_compute[227360]: 2025-11-29 08:05:51.618 227364 DEBUG nova.storage.rbd_utils [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] flattening images/37d12ee4-89a4-499b-beea-6b6d4e46474b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:05:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 03:05:53 np0005539551 nova_compute[227360]: 2025-11-29 08:05:53.013 227364 DEBUG nova.storage.rbd_utils [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(fd2d579cca9d4c91a5394504f57c1581) on rbd image(7c7286ee-7432-4e68-a574-bbe535d1f203_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:05:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 03:05:54 np0005539551 nova_compute[227360]: 2025-11-29 08:05:54.408 227364 DEBUG nova.storage.rbd_utils [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(snap) on rbd image(37d12ee4-89a4-499b-beea-6b6d4e46474b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:05:54 np0005539551 nova_compute[227360]: 2025-11-29 08:05:54.747 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 03:05:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:55.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539551 nova_compute[227360]: 2025-11-29 08:05:56.420 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:58 np0005539551 nova_compute[227360]: 2025-11-29 08:05:58.049 227364 INFO nova.virt.libvirt.driver [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Snapshot image upload complete
Nov 29 03:05:58 np0005539551 nova_compute[227360]: 2025-11-29 08:05:58.049 227364 INFO nova.compute.manager [None req-b7e191ea-3cb4-4100-b3e9-5dcd4a75f8ba fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Took 10.82 seconds to snapshot the instance on the hypervisor.
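The rbd_utils lines between 08:05:47 and 08:05:54 trace the live-snapshot sequence: snapshot the instance disk, clone that snapshot into the images pool, flatten the clone so it no longer depends on the source, drop the temporary snapshot, then snapshot the finished image. A sketch of that call order using a recording stub; the real code drives librbd through nova.storage.rbd_utils, and the image names below are shortened for illustration:

```python
# Stub driver that records RBD operations in order, mirroring the
# sequence visible in the log (create_snap -> clone -> flatten ->
# remove_snap -> create_snap).
class StubRBDDriver:
    def __init__(self):
        self.calls = []

    def create_snap(self, image, snap):
        self.calls.append(("create_snap", image, snap))

    def clone(self, src, dest):
        self.calls.append(("clone", src, dest))

    def flatten(self, image):
        self.calls.append(("flatten", image))

    def remove_snap(self, image, snap):
        self.calls.append(("remove_snap", image, snap))


def live_snapshot(rbd, disk, tmp_snap, dest_image):
    rbd.create_snap(disk, tmp_snap)                      # point-in-time of the disk
    rbd.clone(f"vms/{disk}@{tmp_snap}", f"images/{dest_image}")
    rbd.flatten(f"images/{dest_image}")                  # detach clone from parent
    rbd.remove_snap(disk, tmp_snap)                      # temporary snap removed
    rbd.create_snap(f"images/{dest_image}", "snap")      # final snap on the image


rbd = StubRBDDriver()
live_snapshot(rbd, "7c7286ee_disk", "fd2d579c", "37d12ee4")
print([c[0] for c in rbd.calls])
```

Flattening before removing the source snapshot is what allows the temporary snapshot to be deleted immediately: once flattened, the clone in the images pool carries its own data.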
Nov 29 03:05:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:59Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:f9:ef 10.100.0.3
Nov 29 03:05:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:05:59Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:f9:ef 10.100.0.3
Nov 29 03:05:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:05:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:59 np0005539551 nova_compute[227360]: 2025-11-29 08:05:59.749 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 03:06:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:01 np0005539551 nova_compute[227360]: 2025-11-29 08:06:01.423 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:01 np0005539551 podman[250672]: 2025-11-29 08:06:01.636941602 +0000 UTC m=+0.078213644 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:06:01 np0005539551 podman[250671]: 2025-11-29 08:06:01.641984316 +0000 UTC m=+0.079354874 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Nov 29 03:06:01 np0005539551 podman[250670]: 2025-11-29 08:06:01.674949325 +0000 UTC m=+0.115232770 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true)
Nov 29 03:06:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 03:06:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:03.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:04 np0005539551 nova_compute[227360]: 2025-11-29 08:06:04.751 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 03:06:04 np0005539551 nova_compute[227360]: 2025-11-29 08:06:04.993 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:04.996 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:04.998 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:06:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:06 np0005539551 nova_compute[227360]: 2025-11-29 08:06:06.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:07.000 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:07.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:09 np0005539551 nova_compute[227360]: 2025-11-29 08:06:09.754 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.267 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.268 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.295 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.376 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.377 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.383 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.384 227364 INFO nova.compute.claims [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:06:10 np0005539551 nova_compute[227360]: 2025-11-29 08:06:10.722 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1314971384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.159 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.165 227364 DEBUG nova.compute.provider_tree [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.184 227364 DEBUG nova.scheduler.client.report [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.208 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.208 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.268 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.269 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.290 227364 INFO nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.329 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.418 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.421 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.422 227364 INFO nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Creating image(s)#033[00m
Nov 29 03:06:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.446 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.484 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.520 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.525 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.550 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.586 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.588 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.588 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.589 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.620 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.624 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 69d87725-bb3d-4966-8db2-cc1e098be52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:11 np0005539551 nova_compute[227360]: 2025-11-29 08:06:11.930 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 69d87725-bb3d-4966-8db2-cc1e098be52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.024 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] resizing rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.144 227364 DEBUG nova.objects.instance [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.164 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.164 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Ensure instance console log exists: /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.165 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.165 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.165 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:12 np0005539551 nova_compute[227360]: 2025-11-29 08:06:12.259 227364 DEBUG nova.policy [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ee7a93f60394fd9b004c90c25ff5fc1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e02874a2dc44489adba1420baa460f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:13 np0005539551 nova_compute[227360]: 2025-11-29 08:06:13.230 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Successfully created port: 5c6fcb7c-f475-49c0-89c1-51e447434625 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:13.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.551 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Successfully updated port: 5c6fcb7c-f475-49c0-89c1-51e447434625 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.573 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.573 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquired lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.574 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.677 227364 DEBUG nova.compute.manager [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.677 227364 DEBUG nova.compute.manager [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing instance network info cache due to event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.678 227364 DEBUG oslo_concurrency.lockutils [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.755 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:14 np0005539551 nova_compute[227360]: 2025-11-29 08:06:14.796 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:15.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:15 np0005539551 nova_compute[227360]: 2025-11-29 08:06:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:15.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.118 227364 DEBUG nova.network.neutron [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.140 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Releasing lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.140 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance network_info: |[{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.141 227364 DEBUG oslo_concurrency.lockutils [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.141 227364 DEBUG nova.network.neutron [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.145 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start _get_guest_xml network_info=[{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.150 227364 WARNING nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.154 227364 DEBUG nova.virt.libvirt.host [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.155 227364 DEBUG nova.virt.libvirt.host [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.158 227364 DEBUG nova.virt.libvirt.host [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.158 227364 DEBUG nova.virt.libvirt.host [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.159 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.159 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.160 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.160 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.160 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.161 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.162 227364 DEBUG nova.virt.hardware [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.165 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.554 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2763895146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.627 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.655 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:16 np0005539551 nova_compute[227360]: 2025-11-29 08:06:16.659 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1204983032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.071 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.074 227364 DEBUG nova.virt.libvirt.vif [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:11Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.075 227364 DEBUG nova.network.os_vif_util [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.076 227364 DEBUG nova.network.os_vif_util [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.078 227364 DEBUG nova.objects.instance [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.095 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <uuid>69d87725-bb3d-4966-8db2-cc1e098be52d</uuid>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <name>instance-0000003f</name>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:name>tempest-SecurityGroupsTestJSON-server-803229820</nova:name>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:06:16</nova:creationTime>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:user uuid="9ee7a93f60394fd9b004c90c25ff5fc1">tempest-SecurityGroupsTestJSON-1605163301-project-member</nova:user>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:project uuid="9e02874a2dc44489adba1420baa460f2">tempest-SecurityGroupsTestJSON-1605163301</nova:project>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <nova:port uuid="5c6fcb7c-f475-49c0-89c1-51e447434625">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="serial">69d87725-bb3d-4966-8db2-cc1e098be52d</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="uuid">69d87725-bb3d-4966-8db2-cc1e098be52d</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/69d87725-bb3d-4966-8db2-cc1e098be52d_disk">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:cf:fb:05"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <target dev="tap5c6fcb7c-f4"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/console.log" append="off"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:06:17 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:06:17 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:06:17 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:06:17 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.097 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Preparing to wait for external event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.097 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.097 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.098 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.098 227364 DEBUG nova.virt.libvirt.vif [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-Security
GroupsTestJSON-1605163301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:11Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.099 227364 DEBUG nova.network.os_vif_util [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.099 227364 DEBUG nova.network.os_vif_util [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.099 227364 DEBUG os_vif [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.100 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.101 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.105 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.105 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c6fcb7c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.106 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c6fcb7c-f4, col_values=(('external_ids', {'iface-id': '5c6fcb7c-f475-49c0-89c1-51e447434625', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:fb:05', 'vm-uuid': '69d87725-bb3d-4966-8db2-cc1e098be52d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539551 NetworkManager[48922]: <info>  [1764403577.1082] manager: (tap5c6fcb7c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.114 227364 INFO os_vif [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4')#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.166 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.166 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.166 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No VIF found with MAC fa:16:3e:cf:fb:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.166 227364 INFO nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Using config drive#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.191 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:17.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:06:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.457 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.572 227364 INFO nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Creating config drive at /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.577 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c10j3en execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.701 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c10j3en" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.733 227364 DEBUG nova.storage.rbd_utils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.736 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config 69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.949 227364 DEBUG oslo_concurrency.processutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config 69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539551 nova_compute[227360]: 2025-11-29 08:06:17.950 227364 INFO nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Deleting local config drive /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:18 np0005539551 kernel: tap5c6fcb7c-f4: entered promiscuous mode
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.0056] manager: (tap5c6fcb7c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00204|binding|INFO|Claiming lport 5c6fcb7c-f475-49c0-89c1-51e447434625 for this chassis.
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00205|binding|INFO|5c6fcb7c-f475-49c0-89c1-51e447434625: Claiming fa:16:3e:cf:fb:05 10.100.0.8
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.010 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.024 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.026 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 bound to our chassis#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.028 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddaca73e-4e30-4040-a35d-8d63a2e74570#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.048 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed18c25-5d21-47d7-ad39-92681ce980ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.049 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddaca73e-41 in ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.052 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddaca73e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.052 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32c330-bfaf-4f78-8a53-3d8ea0546371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9139d800-a982-4f0b-9f20-fb37070200c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 systemd-udevd[251055]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:18 np0005539551 systemd-machined[190756]: New machine qemu-29-instance-0000003f.
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.068 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0d53f1-6f12-416c-9b45-1aaa343c767c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.0769] device (tap5c6fcb7c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.0780] device (tap5c6fcb7c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:18 np0005539551 systemd[1]: Started Virtual Machine qemu-29-instance-0000003f.
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.099 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8278c89a-921c-455e-9c34-862e18952ee9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00206|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 ovn-installed in OVS
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00207|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 up in Southbound
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.139 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d400432d-11e2-4ffb-9ab3-151e0a308995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 systemd-udevd[251059]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.1458] manager: (tapddaca73e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.147 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fb5de2-57ad-4f71-8537-d6257c938a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.177 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f0c5bf-efa9-4447-8a55-e590515365f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.182 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cf878b-f14b-4338-9796-489e7af9f3b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.2032] device (tapddaca73e-40): carrier: link connected
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.208 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6dc6e4-b504-4da7-b0b8-edaa22f6b51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.223 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bebf91ee-01d4-472d-9370-215f750f16c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662508, 'reachable_time': 30683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251087, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.237 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[562986f8-9a76-4948-8367-8d38e083f003]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:dcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662508, 'tstamp': 662508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251088, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.252 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c29b7011-d04c-43b4-8b70-f5bde3363a42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662508, 'reachable_time': 30683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251089, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.282 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4d0559-8499-479b-be9a-f1441b32a0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.306 227364 DEBUG nova.network.neutron [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updated VIF entry in instance network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.306 227364 DEBUG nova.network.neutron [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.312 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.312 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.312 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.312 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7c7286ee-7432-4e68-a574-bbe535d1f203 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.325 227364 DEBUG oslo_concurrency.lockutils [req-6fb90329-d576-4339-8252-15d889e5c65a req-f7acae24-045c-4772-9f1b-31c7fbc6bd78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.334 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ed1171-57e7-44f0-90a4-1030fd474b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.336 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.336 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.336 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddaca73e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.3709] manager: (tapddaca73e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 03:06:18 np0005539551 kernel: tapddaca73e-40: entered promiscuous mode
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.375 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddaca73e-40, col_values=(('external_ids', {'iface-id': '1387aa6b-e0e8-4d33-958c-8368713de9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00208|binding|INFO|Releasing lport 1387aa6b-e0e8-4d33-958c-8368713de9bf from this chassis (sb_readonly=0)
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.377 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.378 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.378 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2ef667-6084-4f81-ac4d-9e4b20d64180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.378 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.379 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'env', 'PROCESS_TAG=haproxy-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddaca73e-4e30-4040-a35d-8d63a2e74570.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.576 227364 DEBUG nova.compute.manager [req-95a7efb5-5d0d-4fdc-b7f5-25ffa8c37e55 req-90589e12-3a0c-404c-a072-a45e9769055e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.577 227364 DEBUG oslo_concurrency.lockutils [req-95a7efb5-5d0d-4fdc-b7f5-25ffa8c37e55 req-90589e12-3a0c-404c-a072-a45e9769055e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.577 227364 DEBUG oslo_concurrency.lockutils [req-95a7efb5-5d0d-4fdc-b7f5-25ffa8c37e55 req-90589e12-3a0c-404c-a072-a45e9769055e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.578 227364 DEBUG oslo_concurrency.lockutils [req-95a7efb5-5d0d-4fdc-b7f5-25ffa8c37e55 req-90589e12-3a0c-404c-a072-a45e9769055e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.578 227364 DEBUG nova.compute.manager [req-95a7efb5-5d0d-4fdc-b7f5-25ffa8c37e55 req-90589e12-3a0c-404c-a072-a45e9769055e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Processing event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.583 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.583 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.583 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.583 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.584 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.585 227364 INFO nova.compute.manager [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Terminating instance#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.586 227364 DEBUG nova.compute.manager [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:18 np0005539551 kernel: tap29f98dd6-99 (unregistering): left promiscuous mode
Nov 29 03:06:18 np0005539551 NetworkManager[48922]: <info>  [1764403578.6417] device (tap29f98dd6-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00209|binding|INFO|Releasing lport 29f98dd6-993d-4b30-9f39-81b5ae632943 from this chassis (sb_readonly=0)
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00210|binding|INFO|Setting lport 29f98dd6-993d-4b30-9f39-81b5ae632943 down in Southbound
Nov 29 03:06:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:18Z|00211|binding|INFO|Removing iface tap29f98dd6-99 ovn-installed in OVS
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.650 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.652 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.658 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:f9:ef 10.100.0.3'], port_security=['fa:16:3e:de:f9:ef 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c7286ee-7432-4e68-a574-bbe535d1f203', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=29f98dd6-993d-4b30-9f39-81b5ae632943) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.667 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 29 03:06:18 np0005539551 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003b.scope: Consumed 15.074s CPU time.
Nov 29 03:06:18 np0005539551 systemd-machined[190756]: Machine qemu-28-instance-0000003b terminated.
Nov 29 03:06:18 np0005539551 podman[251124]: 2025-11-29 08:06:18.751384077 +0000 UTC m=+0.045502873 container create d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:06:18 np0005539551 systemd[1]: Started libpod-conmon-d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd.scope.
Nov 29 03:06:18 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:06:18 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ae9c76dcdcb8b52ff1e4770b72d36765c55023a5544558165f328a594a57bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:18 np0005539551 podman[251124]: 2025-11-29 08:06:18.726480843 +0000 UTC m=+0.020599669 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.823 227364 INFO nova.virt.libvirt.driver [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Instance destroyed successfully.#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.824 227364 DEBUG nova.objects.instance [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid 7c7286ee-7432-4e68-a574-bbe535d1f203 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:18 np0005539551 podman[251124]: 2025-11-29 08:06:18.831183862 +0000 UTC m=+0.125302688 container init d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.838 227364 DEBUG nova.virt.libvirt.vif [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1247070214',display_name='tempest-ImagesTestJSON-server-1247070214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1247070214',id=59,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-jkf90zee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:58Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=7c7286ee-7432-4e68-a574-bbe535d1f203,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.838 227364 DEBUG nova.network.os_vif_util [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.839 227364 DEBUG nova.network.os_vif_util [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.839 227364 DEBUG os_vif [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.841 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.841 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29f98dd6-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539551 podman[251124]: 2025-11-29 08:06:18.843069029 +0000 UTC m=+0.137187825 container start d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.847 227364 INFO os_vif [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:f9:ef,bridge_name='br-int',has_traffic_filtering=True,id=29f98dd6-993d-4b30-9f39-81b5ae632943,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29f98dd6-99')#033[00m
Nov 29 03:06:18 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [NOTICE]   (251188) : New worker (251205) forked
Nov 29 03:06:18 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [NOTICE]   (251188) : Loading success.
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.906 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 29f98dd6-993d-4b30-9f39-81b5ae632943 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.907 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.908 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74c9f9b9-55cf-4936-9b81-48f67c6babf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:18.908 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.972 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403578.9719613, 69d87725-bb3d-4966-8db2-cc1e098be52d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.972 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.975 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.980 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.984 227364 INFO nova.virt.libvirt.driver [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance spawned successfully.#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.985 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.990 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:18 np0005539551 nova_compute[227360]: 2025-11-29 08:06:18.993 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.004 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.004 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.004 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.005 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.005 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.005 227364 DEBUG nova.virt.libvirt.driver [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:19 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [NOTICE]   (250468) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:19 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [NOTICE]   (250468) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:19 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [WARNING]  (250468) : Exiting Master process...
Nov 29 03:06:19 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [ALERT]    (250468) : Current worker (250470) exited with code 143 (Terminated)
Nov 29 03:06:19 np0005539551 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[250464]: [WARNING]  (250468) : All workers exited. Exiting... (0)
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.035 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.036 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403578.9727683, 69d87725-bb3d-4966-8db2-cc1e098be52d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.036 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:19 np0005539551 systemd[1]: libpod-6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753.scope: Deactivated successfully.
Nov 29 03:06:19 np0005539551 podman[251243]: 2025-11-29 08:06:19.044615466 +0000 UTC m=+0.056896156 container died 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay-45609afc19b98adbab860f11e46b52531327e11f7e784379ad2e3085ac116b24-merged.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539551 podman[251243]: 2025-11-29 08:06:19.078092998 +0000 UTC m=+0.090373688 container cleanup 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:19 np0005539551 systemd[1]: libpod-conmon-6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753.scope: Deactivated successfully.
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.088 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.090 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403578.977176, 69d87725-bb3d-4966-8db2-cc1e098be52d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.091 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.128 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.130 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:19 np0005539551 podman[251275]: 2025-11-29 08:06:19.141840235 +0000 UTC m=+0.042950235 container remove 6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.143 227364 INFO nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.143 227364 DEBUG nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.152 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9567acfb-bd3c-4a5e-ac27-68bb24444c53]: (4, ('Sat Nov 29 08:06:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753)\n6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753\nSat Nov 29 08:06:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753)\n6a96bdfaf459b63927802d5060aee0d803e8c49e91619660567d10c2ae8ab753\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.153 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[68e405bc-8426-4d78-9746-d0f4cc3d156d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.154 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:19 np0005539551 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.155 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.170 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.176 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0eb73f-b9f0-49be-8a3d-4bdc2b8f97fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.192 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e8e9a6-eb09-47d0-9329-fa2430a42261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.194 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f28bc4cd-cf03-4376-90a0-215bbd029fdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.210 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b20d8f94-2f35-4794-b1f6-47334a9a7146]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658929, 'reachable_time': 20352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251287, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.213 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.213 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[45241646-83f6-4f6b-b56e-be42c2a16d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:19 np0005539551 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.256 227364 INFO nova.virt.libvirt.driver [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Deleting instance files /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203_del#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.257 227364 INFO nova.virt.libvirt.driver [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Deletion of /var/lib/nova/instances/7c7286ee-7432-4e68-a574-bbe535d1f203_del complete#033[00m
Nov 29 03:06:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.387 227364 INFO nova.compute.manager [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Took 9.05 seconds to build instance.#033[00m
Nov 29 03:06:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:19.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.457 227364 DEBUG oslo_concurrency.lockutils [None req-6189eb05-95af-4cd6-87d0-702bab4e5787 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.477 227364 INFO nova.compute.manager [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.478 227364 DEBUG oslo.service.loopingcall [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.478 227364 DEBUG nova.compute.manager [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.479 227364 DEBUG nova.network.neutron [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:19 np0005539551 nova_compute[227360]: 2025-11-29 08:06:19.758 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.857 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.858 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:19.858 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.691 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.694 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.695 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.695 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.695 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.695 227364 WARNING nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.696 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-unplugged-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.696 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.696 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.696 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.697 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] No waiting events found dispatching network-vif-unplugged-29f98dd6-993d-4b30-9f39-81b5ae632943 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.697 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-unplugged-29f98dd6-993d-4b30-9f39-81b5ae632943 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.697 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.698 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.698 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.698 227364 DEBUG oslo_concurrency.lockutils [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.698 227364 DEBUG nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] No waiting events found dispatching network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:20 np0005539551 nova_compute[227360]: 2025-11-29 08:06:20.699 227364 WARNING nova.compute.manager [req-55c5f07e-244b-430a-a95f-982bf088334f req-8c4ddd6e-5d45-402a-b43e-0e39f8e5af3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received unexpected event network-vif-plugged-29f98dd6-993d-4b30-9f39-81b5ae632943 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:21.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.764 227364 DEBUG nova.network.neutron [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.800 227364 INFO nova.compute.manager [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Took 2.32 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.871 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.872 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.895 227364 DEBUG nova.compute.manager [req-fd549855-0607-4bec-8fac-393d2c61faf7 req-bdcd1e72-af8d-434c-a1f6-7c70493129e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Received event network-vif-deleted-29f98dd6-993d-4b30-9f39-81b5ae632943 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:21 np0005539551 nova_compute[227360]: 2025-11-29 08:06:21.989 227364 DEBUG oslo_concurrency.processutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.053 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updating instance_info_cache with network_info: [{"id": "29f98dd6-993d-4b30-9f39-81b5ae632943", "address": "fa:16:3e:de:f9:ef", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29f98dd6-99", "ovs_interfaceid": "29f98dd6-993d-4b30-9f39-81b5ae632943", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.090 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-7c7286ee-7432-4e68-a574-bbe535d1f203" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.091 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.091 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.092 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.092 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3669280327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.405 227364 DEBUG oslo_concurrency.processutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.411 227364 DEBUG nova.compute.provider_tree [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.441 227364 DEBUG nova.scheduler.client.report [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.445 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.478 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.481 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.482 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.482 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.482 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.593 227364 INFO nova.scheduler.client.report [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance 7c7286ee-7432-4e68-a574-bbe535d1f203#033[00m
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.702 227364 DEBUG oslo_concurrency.lockutils [None req-83c7e2da-bbb5-4a79-af47-76e42d25b00d fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "7c7286ee-7432-4e68-a574-bbe535d1f203" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1502403023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:22 np0005539551 nova_compute[227360]: 2025-11-29 08:06:22.937 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.021 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.022 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.176 227364 DEBUG nova.compute.manager [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.177 227364 DEBUG nova.compute.manager [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing instance network info cache due to event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.177 227364 DEBUG oslo_concurrency.lockutils [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.177 227364 DEBUG oslo_concurrency.lockutils [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.178 227364 DEBUG nova.network.neutron [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.240 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.241 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4515MB free_disk=20.87618637084961GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.242 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.243 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.321 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 69d87725-bb3d-4966-8db2-cc1e098be52d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.323 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.324 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.342 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.343 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.343 227364 INFO nova.compute.manager [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Rebooting instance#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.369 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.382 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:23.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/36691193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.909 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.919 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:23 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.950 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.979 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:06:23 np0005539551 nova_compute[227360]: 2025-11-29 08:06:23.979 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.760 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.956 227364 DEBUG nova.network.neutron [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updated VIF entry in instance network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.957 227364 DEBUG nova.network.neutron [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.982 227364 DEBUG oslo_concurrency.lockutils [req-d74eddcf-a481-4250-80e3-01e7ce04c9f7 req-6db07f30-3e2a-4e32-b0f4-2cba333b42e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.983 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquired lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:24 np0005539551 nova_compute[227360]: 2025-11-29 08:06:24.983 227364 DEBUG nova.network.neutron [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:25.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:25.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 03:06:26 np0005539551 nova_compute[227360]: 2025-11-29 08:06:26.980 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.308 227364 DEBUG nova.network.neutron [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.334 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Releasing lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.335 227364 DEBUG nova.compute.manager [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:27.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:27.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:27 np0005539551 kernel: tap5c6fcb7c-f4 (unregistering): left promiscuous mode
Nov 29 03:06:27 np0005539551 NetworkManager[48922]: <info>  [1764403587.5284] device (tap5c6fcb7c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:27Z|00212|binding|INFO|Releasing lport 5c6fcb7c-f475-49c0-89c1-51e447434625 from this chassis (sb_readonly=0)
Nov 29 03:06:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:27Z|00213|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 down in Southbound
Nov 29 03:06:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:27Z|00214|binding|INFO|Removing iface tap5c6fcb7c-f4 ovn-installed in OVS
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.538 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.540 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.548 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 9991b07a-f9c0-43a9-9d09-efba18b71e42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.549 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 unbound from our chassis#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.550 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddaca73e-4e30-4040-a35d-8d63a2e74570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.551 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ae407116-4b6b-495a-8561-a31df0639146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.552 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace which is not needed anymore#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 29 03:06:27 np0005539551 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003f.scope: Consumed 9.475s CPU time.
Nov 29 03:06:27 np0005539551 systemd-machined[190756]: Machine qemu-29-instance-0000003f terminated.
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [NOTICE]   (251188) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [NOTICE]   (251188) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [WARNING]  (251188) : Exiting Master process...
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [WARNING]  (251188) : Exiting Master process...
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [ALERT]    (251188) : Current worker (251205) exited with code 143 (Terminated)
Nov 29 03:06:27 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251155]: [WARNING]  (251188) : All workers exited. Exiting... (0)
Nov 29 03:06:27 np0005539551 systemd[1]: libpod-d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd.scope: Deactivated successfully.
Nov 29 03:06:27 np0005539551 podman[251382]: 2025-11-29 08:06:27.676602723 +0000 UTC m=+0.042376099 container died d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:06:27 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:27 np0005539551 systemd[1]: var-lib-containers-storage-overlay-13ae9c76dcdcb8b52ff1e4770b72d36765c55023a5544558165f328a594a57bf-merged.mount: Deactivated successfully.
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.714 227364 INFO nova.virt.libvirt.driver [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance destroyed successfully.#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.715 227364 DEBUG nova.objects.instance [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'resources' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:27 np0005539551 podman[251382]: 2025-11-29 08:06:27.723227095 +0000 UTC m=+0.089000461 container cleanup d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:27 np0005539551 systemd[1]: libpod-conmon-d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd.scope: Deactivated successfully.
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.735 227364 DEBUG nova.virt.libvirt.vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:27Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.736 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.737 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.737 227364 DEBUG os_vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.739 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.739 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c6fcb7c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.741 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.742 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.745 227364 INFO os_vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4')#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.753 227364 DEBUG nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start _get_guest_xml network_info=[{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.757 227364 WARNING nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.761 227364 DEBUG nova.virt.libvirt.host [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.761 227364 DEBUG nova.virt.libvirt.host [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.765 227364 DEBUG nova.virt.libvirt.host [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.765 227364 DEBUG nova.virt.libvirt.host [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.766 227364 DEBUG nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.766 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.767 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.767 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.767 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.767 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.767 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.virt.hardware [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.768 227364 DEBUG nova.objects.instance [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:27 np0005539551 podman[251425]: 2025-11-29 08:06:27.782033171 +0000 UTC m=+0.037980433 container remove d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.784 227364 DEBUG oslo_concurrency.processutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.788 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dcd24c-d187-4f8e-b086-4343a37821db]: (4, ('Sat Nov 29 08:06:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd)\nd3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd\nSat Nov 29 08:06:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (d3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd)\nd3833b73f7b6db2931eee8fe82ad3e13ed8263c43a69ee32cd63e3f2c8511bfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.789 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0328cd-892c-44ee-bce9-e45be3dd8b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.790 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:27 np0005539551 kernel: tapddaca73e-40: left promiscuous mode
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.795 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da3f09d0-1e54-4c49-b583-d9721e6c66a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 nova_compute[227360]: 2025-11-29 08:06:27.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.812 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[896fd8c3-bb2c-4c41-81e7-dac7d9498d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.813 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f5492c46-9752-4c8c-bd91-345e80ab927a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.827 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[70ebed5b-a89e-4373-a428-38d0a125c438]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662501, 'reachable_time': 31491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251441, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:27 np0005539551 systemd[1]: run-netns-ovnmeta\x2dddaca73e\x2d4e30\x2d4040\x2da35d\x2d8d63a2e74570.mount: Deactivated successfully.
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.830 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:27 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:27.831 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6782bf36-b726-4f57-a34d-743dea7df408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3059633587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.222 227364 DEBUG oslo_concurrency.processutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.276 227364 DEBUG oslo_concurrency.processutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2729519942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.725 227364 DEBUG oslo_concurrency.processutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.727 227364 DEBUG nova.virt.libvirt.vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:27Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.727 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.728 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.730 227364 DEBUG nova.objects.instance [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.746 227364 DEBUG nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <uuid>69d87725-bb3d-4966-8db2-cc1e098be52d</uuid>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <name>instance-0000003f</name>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:name>tempest-SecurityGroupsTestJSON-server-803229820</nova:name>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:06:27</nova:creationTime>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:user uuid="9ee7a93f60394fd9b004c90c25ff5fc1">tempest-SecurityGroupsTestJSON-1605163301-project-member</nova:user>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:project uuid="9e02874a2dc44489adba1420baa460f2">tempest-SecurityGroupsTestJSON-1605163301</nova:project>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <nova:port uuid="5c6fcb7c-f475-49c0-89c1-51e447434625">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="serial">69d87725-bb3d-4966-8db2-cc1e098be52d</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="uuid">69d87725-bb3d-4966-8db2-cc1e098be52d</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/69d87725-bb3d-4966-8db2-cc1e098be52d_disk">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/69d87725-bb3d-4966-8db2-cc1e098be52d_disk.config">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:cf:fb:05"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <target dev="tap5c6fcb7c-f4"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d/console.log" append="off"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:06:28 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:06:28 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:06:28 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:06:28 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.747 227364 DEBUG nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.747 227364 DEBUG nova.virt.libvirt.driver [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.748 227364 DEBUG nova.virt.libvirt.vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:27Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.748 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.749 227364 DEBUG nova.network.os_vif_util [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.749 227364 DEBUG os_vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.749 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.750 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.750 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.753 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c6fcb7c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.754 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c6fcb7c-f4, col_values=(('external_ids', {'iface-id': '5c6fcb7c-f475-49c0-89c1-51e447434625', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:fb:05', 'vm-uuid': '69d87725-bb3d-4966-8db2-cc1e098be52d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.755 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.7565] manager: (tap5c6fcb7c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.757 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.760 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.761 227364 INFO os_vif [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4')#033[00m
Nov 29 03:06:28 np0005539551 kernel: tap5c6fcb7c-f4: entered promiscuous mode
Nov 29 03:06:28 np0005539551 systemd-udevd[251359]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.8346] manager: (tap5c6fcb7c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.834 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:28Z|00215|binding|INFO|Claiming lport 5c6fcb7c-f475-49c0-89c1-51e447434625 for this chassis.
Nov 29 03:06:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:28Z|00216|binding|INFO|5c6fcb7c-f475-49c0-89c1-51e447434625: Claiming fa:16:3e:cf:fb:05 10.100.0.8
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.843 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 9991b07a-f9c0-43a9-9d09-efba18b71e42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.844 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 bound to our chassis#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.846 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddaca73e-4e30-4040-a35d-8d63a2e74570#033[00m
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.8476] device (tap5c6fcb7c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.8489] device (tap5c6fcb7c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:28Z|00217|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 ovn-installed in OVS
Nov 29 03:06:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:28Z|00218|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 up in Southbound
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.852 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 nova_compute[227360]: 2025-11-29 08:06:28.854 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.859 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d1f4e2-952b-4b0b-857a-7019eac6d437]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.859 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddaca73e-41 in ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.861 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddaca73e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.861 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bc013be5-db3e-476a-83d9-733c50f3e602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.862 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[766fea69-34fe-451b-a42f-cc7271b8fc6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 systemd-machined[190756]: New machine qemu-30-instance-0000003f.
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.877 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6aff60fa-11ae-45f9-a31e-f7106046268f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 systemd[1]: Started Virtual Machine qemu-30-instance-0000003f.
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.900 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7b68852e-8a3b-4c52-b0a9-88c93a76fb0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.925 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a3587546-813f-469b-a631-b28063e1827e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.930 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0e951f-d598-43f8-9590-df94d1f2ddf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.9319] manager: (tapddaca73e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.958 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f6f036-ad9f-443a-8059-48674b6b9dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.961 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3b690334-b741-49f1-8ae1-2876375cd21b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:28 np0005539551 NetworkManager[48922]: <info>  [1764403588.9876] device (tapddaca73e-40): carrier: link connected
Nov 29 03:06:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:28.994 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c8e130-02c9-4bbe-825f-92fa695a8695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.009 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[46a99d45-0053-4d0a-bf5e-faae58d7abda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663586, 'reachable_time': 19154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251548, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.031 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[caae49ae-6000-42f5-a756-cf9aff349235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:dcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663586, 'tstamp': 663586}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251549, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.045 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75d80b60-60bb-48df-9c8a-a3b123b980b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663586, 'reachable_time': 19154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251550, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.085 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e4ced3-bd7b-4b87-a89b-71020a6f4ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.157 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaac3b7-8518-499a-8237-7c6a4c04b00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.160 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.160 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.161 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddaca73e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539551 NetworkManager[48922]: <info>  [1764403589.1649] manager: (tapddaca73e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.164 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539551 kernel: tapddaca73e-40: entered promiscuous mode
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.172 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddaca73e-40, col_values=(('external_ids', {'iface-id': '1387aa6b-e0e8-4d33-958c-8368713de9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.177 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:29Z|00219|binding|INFO|Releasing lport 1387aa6b-e0e8-4d33-958c-8368713de9bf from this chassis (sb_readonly=0)
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.179 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.180 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f646f8ea-45f0-4312-869c-92864bb97d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.181 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:29.182 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'env', 'PROCESS_TAG=haproxy-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddaca73e-4e30-4040-a35d-8d63a2e74570.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.261 227364 DEBUG nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.261 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.262 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.262 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.263 227364 DEBUG nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.263 227364 WARNING nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.263 227364 DEBUG nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.264 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.264 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.265 227364 DEBUG oslo_concurrency.lockutils [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.265 227364 DEBUG nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.266 227364 WARNING nova.compute.manager [req-c8b5889e-b0df-4c7a-b23f-488bf974a2ef req-eb9bde70-22ce-4cae-ba73-8d5044a2f8a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:06:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.482 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 69d87725-bb3d-4966-8db2-cc1e098be52d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.483 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403589.482232, 69d87725-bb3d-4966-8db2-cc1e098be52d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.484 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.486 227364 DEBUG nova.compute.manager [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.489 227364 INFO nova.virt.libvirt.driver [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance rebooted successfully.#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.490 227364 DEBUG nova.compute.manager [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.515 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.519 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.550 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.551 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403589.4855173, 69d87725-bb3d-4966-8db2-cc1e098be52d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.551 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.557 227364 DEBUG oslo_concurrency.lockutils [None req-05630178-8a56-44aa-b68b-3d20f9d9dcea 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:29 np0005539551 podman[251625]: 2025-11-29 08:06:29.569675137 +0000 UTC m=+0.056143306 container create ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.573 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.578 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:29 np0005539551 systemd[1]: Started libpod-conmon-ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c.scope.
Nov 29 03:06:29 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:06:29 np0005539551 podman[251625]: 2025-11-29 08:06:29.545484154 +0000 UTC m=+0.031952373 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9552fd08376b7d817d6762f1cbbeebfdb4cc353c8e520106c1f6fc800edfcfd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:29 np0005539551 podman[251625]: 2025-11-29 08:06:29.652460333 +0000 UTC m=+0.138928522 container init ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:06:29 np0005539551 podman[251625]: 2025-11-29 08:06:29.65688766 +0000 UTC m=+0.143355829 container start ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:29 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [NOTICE]   (251644) : New worker (251646) forked
Nov 29 03:06:29 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [NOTICE]   (251644) : Loading success.
Nov 29 03:06:29 np0005539551 nova_compute[227360]: 2025-11-29 08:06:29.763 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:30 np0005539551 nova_compute[227360]: 2025-11-29 08:06:30.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:06:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.546 227364 DEBUG nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.547 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.548 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.548 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.549 227364 DEBUG nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.549 227364 WARNING nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.550 227364 DEBUG nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.551 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.551 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.552 227364 DEBUG oslo_concurrency.lockutils [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.552 227364 DEBUG nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.553 227364 WARNING nova.compute.manager [req-54246531-6874-4c0e-87cc-d2e0bf4b4a4a req-5d6bdd9c-ac90-43bc-86b0-a36ab1f86dd1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.843 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.844 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.877 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.986 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.987 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.995 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:31 np0005539551 nova_compute[227360]: 2025-11-29 08:06:31.996 227364 INFO nova.compute.claims [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.152 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:32 np0005539551 podman[251677]: 2025-11-29 08:06:32.639333535 +0000 UTC m=+0.076058176 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1647269240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:32 np0005539551 podman[251676]: 2025-11-29 08:06:32.649224358 +0000 UTC m=+0.081230204 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.672 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.678 227364 DEBUG nova.compute.provider_tree [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:32 np0005539551 podman[251675]: 2025-11-29 08:06:32.683633485 +0000 UTC m=+0.116762871 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.699 227364 DEBUG nova.scheduler.client.report [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.728 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.729 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.771 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.772 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.804 227364 INFO nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.833 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.931 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.932 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.932 227364 INFO nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Creating image(s)#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.956 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:32 np0005539551 nova_compute[227360]: 2025-11-29 08:06:32.987 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.024 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.028 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.093 227364 DEBUG nova.policy [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f7f42bfe6ee49b088f30c9eb496e288', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c1b873d94187455ea17d369aa54754b3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.111 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.112 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.112 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.113 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.137 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.141 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e957695b-44c7-4a21-aabb-9ddc5914a26a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.449 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e957695b-44c7-4a21-aabb-9ddc5914a26a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.532 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] resizing rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.663 227364 DEBUG nova.objects.instance [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lazy-loading 'migration_context' on Instance uuid e957695b-44c7-4a21-aabb-9ddc5914a26a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.684 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.685 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Ensure instance console log exists: /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.685 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.686 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.686 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.722 227364 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.722 227364 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing instance network info cache due to event network-changed-5c6fcb7c-f475-49c0-89c1-51e447434625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.722 227364 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.723 227364 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.723 227364 DEBUG nova.network.neutron [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Refreshing network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.757 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.822 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403578.8207865, 7c7286ee-7432-4e68-a574-bbe535d1f203 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.822 227364 INFO nova.compute.manager [-] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.870 227364 DEBUG nova.compute.manager [None req-65e16cd8-33b8-4b67-a24d-226618efc272 - - - - - -] [instance: 7c7286ee-7432-4e68-a574-bbe535d1f203] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.988 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.989 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.989 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.989 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.990 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.991 227364 INFO nova.compute.manager [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Terminating instance#033[00m
Nov 29 03:06:33 np0005539551 nova_compute[227360]: 2025-11-29 08:06:33.992 227364 DEBUG nova.compute.manager [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:34 np0005539551 kernel: tap5c6fcb7c-f4 (unregistering): left promiscuous mode
Nov 29 03:06:34 np0005539551 NetworkManager[48922]: <info>  [1764403594.0364] device (tap5c6fcb7c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00220|binding|INFO|Releasing lport 5c6fcb7c-f475-49c0-89c1-51e447434625 from this chassis (sb_readonly=0)
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00221|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 down in Southbound
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00222|binding|INFO|Removing iface tap5c6fcb7c-f4 ovn-installed in OVS
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.058 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 9991b07a-f9c0-43a9-9d09-efba18b71e42 dfad977b-db2a-4e14-a9c9-150bde396fcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.059 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.061 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 unbound from our chassis#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.062 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddaca73e-4e30-4040-a35d-8d63a2e74570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.063 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[00e9204a-30fa-47d5-a561-2891152365db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.064 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace which is not needed anymore#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.081 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 29 03:06:34 np0005539551 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003f.scope: Consumed 5.296s CPU time.
Nov 29 03:06:34 np0005539551 systemd-machined[190756]: Machine qemu-30-instance-0000003f terminated.
Nov 29 03:06:34 np0005539551 kernel: tap5c6fcb7c-f4: entered promiscuous mode
Nov 29 03:06:34 np0005539551 NetworkManager[48922]: <info>  [1764403594.2167] manager: (tap5c6fcb7c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 03:06:34 np0005539551 systemd-udevd[251908]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [NOTICE]   (251644) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [NOTICE]   (251644) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [WARNING]  (251644) : Exiting Master process...
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [WARNING]  (251644) : Exiting Master process...
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00223|binding|INFO|Claiming lport 5c6fcb7c-f475-49c0-89c1-51e447434625 for this chassis.
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00224|binding|INFO|5c6fcb7c-f475-49c0-89c1-51e447434625: Claiming fa:16:3e:cf:fb:05 10.100.0.8
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.218 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 kernel: tap5c6fcb7c-f4 (unregistering): left promiscuous mode
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [ALERT]    (251644) : Current worker (251646) exited with code 143 (Terminated)
Nov 29 03:06:34 np0005539551 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[251640]: [WARNING]  (251644) : All workers exited. Exiting... (0)
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.227 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 9991b07a-f9c0-43a9-9d09-efba18b71e42 dfad977b-db2a-4e14-a9c9-150bde396fcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:34 np0005539551 systemd[1]: libpod-ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c.scope: Deactivated successfully.
Nov 29 03:06:34 np0005539551 podman[251929]: 2025-11-29 08:06:34.229867033 +0000 UTC m=+0.067519179 container died ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.239 227364 INFO nova.virt.libvirt.driver [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance destroyed successfully.#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.239 227364 DEBUG nova.objects.instance [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'resources' on Instance uuid 69d87725-bb3d-4966-8db2-cc1e098be52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00225|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 ovn-installed in OVS
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00226|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 up in Southbound
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00227|binding|INFO|Releasing lport 5c6fcb7c-f475-49c0-89c1-51e447434625 from this chassis (sb_readonly=1)
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00228|if_status|INFO|Dropped 1 log messages in last 590 seconds (most recently, 590 seconds ago) due to excessive rate
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00229|if_status|INFO|Not setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 down as sb is readonly
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00230|binding|INFO|Removing iface tap5c6fcb7c-f4 ovn-installed in OVS
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00231|binding|INFO|Releasing lport 5c6fcb7c-f475-49c0-89c1-51e447434625 from this chassis (sb_readonly=0)
Nov 29 03:06:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:34Z|00232|binding|INFO|Setting lport 5c6fcb7c-f475-49c0-89c1-51e447434625 down in Southbound
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.257 227364 DEBUG nova.virt.libvirt.vif [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-803229820',display_name='tempest-SecurityGroupsTestJSON-server-803229820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-803229820',id=63,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-fwyg2zal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:29Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=69d87725-bb3d-4966-8db2-cc1e098be52d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.258 227364 DEBUG nova.network.os_vif_util [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.259 227364 DEBUG nova.network.os_vif_util [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.259 227364 DEBUG os_vif [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.262 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c6fcb7c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.267 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:fb:05 10.100.0.8'], port_security=['fa:16:3e:cf:fb:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69d87725-bb3d-4966-8db2-cc1e098be52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 9991b07a-f9c0-43a9-9d09-efba18b71e42 dfad977b-db2a-4e14-a9c9-150bde396fcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5c6fcb7c-f475-49c0-89c1-51e447434625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.267 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.269 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.271 227364 INFO os_vif [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:fb:05,bridge_name='br-int',has_traffic_filtering=True,id=5c6fcb7c-f475-49c0-89c1-51e447434625,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c6fcb7c-f4')#033[00m
Nov 29 03:06:34 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:34 np0005539551 systemd[1]: var-lib-containers-storage-overlay-9552fd08376b7d817d6762f1cbbeebfdb4cc353c8e520106c1f6fc800edfcfd5-merged.mount: Deactivated successfully.
Nov 29 03:06:34 np0005539551 podman[251929]: 2025-11-29 08:06:34.299787045 +0000 UTC m=+0.137439181 container cleanup ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:34 np0005539551 systemd[1]: libpod-conmon-ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c.scope: Deactivated successfully.
Nov 29 03:06:34 np0005539551 podman[251978]: 2025-11-29 08:06:34.363723727 +0000 UTC m=+0.043099049 container remove ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.369 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0b70bb-17e8-49ec-b770-4c628f18094e]: (4, ('Sat Nov 29 08:06:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c)\nba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c\nSat Nov 29 08:06:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (ba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c)\nba2d031d543028deed6bdf1d70844efe87fde3a894ad9c38b08584e789557c6c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.370 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2149b24a-e41b-4563-95ae-859ee47a541b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.371 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539551 kernel: tapddaca73e-40: left promiscuous mode
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.373 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.375 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.379 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[41228164-6ed8-49f7-816f-d260ef801305]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.389 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.394 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2d692fae-3bad-4650-8939-18885ab9ae77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.396 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[43a95953-9dfc-46b1-985f-d17bcf083d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.411 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[01a771cd-64b0-4ecb-b5ca-8808d7a421fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663580, 'reachable_time': 33991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251993, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 systemd[1]: run-netns-ovnmeta\x2dddaca73e\x2d4e30\x2d4040\x2da35d\x2d8d63a2e74570.mount: Deactivated successfully.
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.415 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.415 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9ea70c-f55d-43df-96f8-c04c54b1bc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.417 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 unbound from our chassis#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.418 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddaca73e-4e30-4040-a35d-8d63a2e74570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.418 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[caf15e11-a20b-4277-82da-7f9b6f397ca4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.420 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5c6fcb7c-f475-49c0-89c1-51e447434625 in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 unbound from our chassis#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.421 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddaca73e-4e30-4040-a35d-8d63a2e74570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:34.422 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7e70ff-2096-41f8-9563-b3237e06ed53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.444 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Successfully created port: aa8f42d7-3fdd-4195-8d37-90fa410dee5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.688 227364 INFO nova.virt.libvirt.driver [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Deleting instance files /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d_del#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.689 227364 INFO nova.virt.libvirt.driver [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Deletion of /var/lib/nova/instances/69d87725-bb3d-4966-8db2-cc1e098be52d_del complete#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.767 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.773 227364 INFO nova.compute.manager [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.774 227364 DEBUG oslo.service.loopingcall [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.774 227364 DEBUG nova.compute.manager [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:34 np0005539551 nova_compute[227360]: 2025-11-29 08:06:34.774 227364 DEBUG nova.network.neutron [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.941 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.941 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.942 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 WARNING nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.943 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 WARNING nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.944 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 WARNING nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.945 227364 DEBUG oslo_concurrency.lockutils [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.946 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:35 np0005539551 nova_compute[227360]: 2025-11-29 08:06:35.946 227364 DEBUG nova.compute.manager [req-11ae5c29-3f0a-447f-9ca7-3750f792317f req-65d94362-733c-4fad-9185-f3c5032a3988 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-unplugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.189 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Successfully updated port: aa8f42d7-3fdd-4195-8d37-90fa410dee5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.218 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.218 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquired lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.218 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.229 227364 DEBUG nova.network.neutron [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.249 227364 INFO nova.compute.manager [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.297 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.297 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.309 227364 DEBUG nova.compute.manager [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-changed-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.310 227364 DEBUG nova.compute.manager [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Refreshing instance network info cache due to event network-changed-aa8f42d7-3fdd-4195-8d37-90fa410dee5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.310 227364 DEBUG oslo_concurrency.lockutils [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.317 227364 DEBUG nova.network.neutron [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updated VIF entry in instance network info cache for port 5c6fcb7c-f475-49c0-89c1-51e447434625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.317 227364 DEBUG nova.network.neutron [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Updating instance_info_cache with network_info: [{"id": "5c6fcb7c-f475-49c0-89c1-51e447434625", "address": "fa:16:3e:cf:fb:05", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c6fcb7c-f4", "ovs_interfaceid": "5c6fcb7c-f475-49c0-89c1-51e447434625", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.344 227364 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-69d87725-bb3d-4966-8db2-cc1e098be52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.386 227364 DEBUG oslo_concurrency.processutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.408 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3709140494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.850 227364 DEBUG oslo_concurrency.processutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.859 227364 DEBUG nova.compute.provider_tree [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.885 227364 DEBUG nova.scheduler.client.report [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.915 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:36 np0005539551 nova_compute[227360]: 2025-11-29 08:06:36.956 227364 INFO nova.scheduler.client.report [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Deleted allocations for instance 69d87725-bb3d-4966-8db2-cc1e098be52d#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.061 227364 DEBUG oslo_concurrency.lockutils [None req-1117cede-4d0a-4dbb-b3c6-e3426ae62bd7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:37.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.762 227364 DEBUG nova.network.neutron [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Updating instance_info_cache with network_info: [{"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.783 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Releasing lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.784 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Instance network_info: |[{"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.784 227364 DEBUG oslo_concurrency.lockutils [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.785 227364 DEBUG nova.network.neutron [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Refreshing network info cache for port aa8f42d7-3fdd-4195-8d37-90fa410dee5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.788 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Start _get_guest_xml network_info=[{"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.793 227364 WARNING nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.799 227364 DEBUG nova.virt.libvirt.host [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.800 227364 DEBUG nova.virt.libvirt.host [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.807 227364 DEBUG nova.virt.libvirt.host [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.807 227364 DEBUG nova.virt.libvirt.host [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.808 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.809 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.809 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.810 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.810 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.810 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.811 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.811 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.811 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.812 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.812 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.812 227364 DEBUG nova.virt.hardware [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:37 np0005539551 nova_compute[227360]: 2025-11-29 08:06:37.816 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.050 227364 DEBUG nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.051 227364 DEBUG oslo_concurrency.lockutils [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.051 227364 DEBUG oslo_concurrency.lockutils [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.051 227364 DEBUG oslo_concurrency.lockutils [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "69d87725-bb3d-4966-8db2-cc1e098be52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.052 227364 DEBUG nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] No waiting events found dispatching network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.052 227364 WARNING nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received unexpected event network-vif-plugged-5c6fcb7c-f475-49c0-89c1-51e447434625 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.052 227364 DEBUG nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Received event network-vif-deleted-5c6fcb7c-f475-49c0-89c1-51e447434625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.052 227364 INFO nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Neutron deleted interface 5c6fcb7c-f475-49c0-89c1-51e447434625; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.052 227364 DEBUG nova.network.neutron [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.055 227364 DEBUG nova.compute.manager [req-a5f89d61-07ee-47c2-bd12-64ba5a8345bb req-40360585-b191-4aaf-996f-2bae9b71598b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Detach interface failed, port_id=5c6fcb7c-f475-49c0-89c1-51e447434625, reason: Instance 69d87725-bb3d-4966-8db2-cc1e098be52d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:06:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2435784468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.238 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.273 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.278 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 03:06:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1164697679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.790 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.792 227364 DEBUG nova.virt.libvirt.vif [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-600217881',display_name='tempest-ImagesNegativeTestJSON-server-600217881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-600217881',id=65,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1b873d94187455ea17d369aa54754b3',ramdisk_id='',reservation_id='r-4689lenp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1047505779',owner_user_name='tempest-ImagesNegativeTest
JSON-1047505779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:32Z,user_data=None,user_id='7f7f42bfe6ee49b088f30c9eb496e288',uuid=e957695b-44c7-4a21-aabb-9ddc5914a26a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.792 227364 DEBUG nova.network.os_vif_util [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converting VIF {"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.793 227364 DEBUG nova.network.os_vif_util [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.795 227364 DEBUG nova.objects.instance [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e957695b-44c7-4a21-aabb-9ddc5914a26a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.832 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <uuid>e957695b-44c7-4a21-aabb-9ddc5914a26a</uuid>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <name>instance-00000041</name>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:name>tempest-ImagesNegativeTestJSON-server-600217881</nova:name>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:06:37</nova:creationTime>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:user uuid="7f7f42bfe6ee49b088f30c9eb496e288">tempest-ImagesNegativeTestJSON-1047505779-project-member</nova:user>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:project uuid="c1b873d94187455ea17d369aa54754b3">tempest-ImagesNegativeTestJSON-1047505779</nova:project>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <nova:port uuid="aa8f42d7-3fdd-4195-8d37-90fa410dee5a">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="serial">e957695b-44c7-4a21-aabb-9ddc5914a26a</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="uuid">e957695b-44c7-4a21-aabb-9ddc5914a26a</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e957695b-44c7-4a21-aabb-9ddc5914a26a_disk">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:70:3a:0c"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <target dev="tapaa8f42d7-3f"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/console.log" append="off"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:06:38 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:06:38 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:06:38 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:06:38 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.833 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Preparing to wait for external event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.834 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.834 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.835 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.836 227364 DEBUG nova.virt.libvirt.vif [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-600217881',display_name='tempest-ImagesNegativeTestJSON-server-600217881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-600217881',id=65,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1b873d94187455ea17d369aa54754b3',ramdisk_id='',reservation_id='r-4689lenp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1047505779',owner_user_name='tempest-ImagesNe
gativeTestJSON-1047505779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:32Z,user_data=None,user_id='7f7f42bfe6ee49b088f30c9eb496e288',uuid=e957695b-44c7-4a21-aabb-9ddc5914a26a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.836 227364 DEBUG nova.network.os_vif_util [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converting VIF {"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.838 227364 DEBUG nova.network.os_vif_util [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.839 227364 DEBUG os_vif [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.841 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.842 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.846 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.847 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa8f42d7-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.848 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa8f42d7-3f, col_values=(('external_ids', {'iface-id': 'aa8f42d7-3fdd-4195-8d37-90fa410dee5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:3a:0c', 'vm-uuid': 'e957695b-44c7-4a21-aabb-9ddc5914a26a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.849 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:38 np0005539551 NetworkManager[48922]: <info>  [1764403598.8503] manager: (tapaa8f42d7-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.851 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.854 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.855 227364 INFO os_vif [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f')#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.991 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.991 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.992 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] No VIF found with MAC fa:16:3e:70:3a:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:38 np0005539551 nova_compute[227360]: 2025-11-29 08:06:38.992 227364 INFO nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Using config drive#033[00m
Nov 29 03:06:39 np0005539551 nova_compute[227360]: 2025-11-29 08:06:39.018 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:39.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:39.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 03:06:39 np0005539551 nova_compute[227360]: 2025-11-29 08:06:39.767 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:39 np0005539551 nova_compute[227360]: 2025-11-29 08:06:39.926 227364 INFO nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Creating config drive at /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config#033[00m
Nov 29 03:06:39 np0005539551 nova_compute[227360]: 2025-11-29 08:06:39.933 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bkrhmyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.063 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7bkrhmyh" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.094 227364 DEBUG nova.storage.rbd_utils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] rbd image e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.097 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.145 227364 DEBUG nova.network.neutron [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Updated VIF entry in instance network info cache for port aa8f42d7-3fdd-4195-8d37-90fa410dee5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.146 227364 DEBUG nova.network.neutron [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Updating instance_info_cache with network_info: [{"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.162 227364 DEBUG oslo_concurrency.lockutils [req-94bb7d7f-b035-417d-a872-c9026e5d4729 req-33c6cedb-527f-4113-b922-d32805bc60e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e957695b-44c7-4a21-aabb-9ddc5914a26a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.254 227364 DEBUG oslo_concurrency.processutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config e957695b-44c7-4a21-aabb-9ddc5914a26a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.255 227364 INFO nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Deleting local config drive /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:40 np0005539551 kernel: tapaa8f42d7-3f: entered promiscuous mode
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.3021] manager: (tapaa8f42d7-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 03:06:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:40Z|00233|binding|INFO|Claiming lport aa8f42d7-3fdd-4195-8d37-90fa410dee5a for this chassis.
Nov 29 03:06:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:40Z|00234|binding|INFO|aa8f42d7-3fdd-4195-8d37-90fa410dee5a: Claiming fa:16:3e:70:3a:0c 10.100.0.12
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.313 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:3a:0c 10.100.0.12'], port_security=['fa:16:3e:70:3a:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e957695b-44c7-4a21-aabb-9ddc5914a26a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b873d94187455ea17d369aa54754b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90c24a64-4b2d-4433-b7d6-cb986d4ad977', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=441708a5-b403-4db6-93c7-8c6405400645, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=aa8f42d7-3fdd-4195-8d37-90fa410dee5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.314 139482 INFO neutron.agent.ovn.metadata.agent [-] Port aa8f42d7-3fdd-4195-8d37-90fa410dee5a in datapath 47634f95-0b49-48e4-9aa2-5fcc460d27f7 bound to our chassis#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.315 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47634f95-0b49-48e4-9aa2-5fcc460d27f7#033[00m
Nov 29 03:06:40 np0005539551 systemd-udevd[252153]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.327 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f22369-c981-4729-a534-33e8217c706a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.329 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47634f95-01 in ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.330 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47634f95-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.330 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[613ebb57-444a-4529-9012-87e3fe396b43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.331 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[01bf1722-3b67-4f18-a81d-7c9500e8bdff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 systemd-machined[190756]: New machine qemu-31-instance-00000041.
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.3429] device (tapaa8f42d7-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.342 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[46b074b7-5df2-4a9f-bde9-b3acb385ba4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.3439] device (tapaa8f42d7-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:40 np0005539551 systemd[1]: Started Virtual Machine qemu-31-instance-00000041.
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.365 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[93edf400-21bd-4d09-a0bc-e879b16b6d15]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:40Z|00235|binding|INFO|Setting lport aa8f42d7-3fdd-4195-8d37-90fa410dee5a ovn-installed in OVS
Nov 29 03:06:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:40Z|00236|binding|INFO|Setting lport aa8f42d7-3fdd-4195-8d37-90fa410dee5a up in Southbound
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.395 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2895b741-ad8d-431c-81a7-ba712a311821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.401 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f2136f-555f-46a0-8fe5-c4520b245ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.4024] manager: (tap47634f95-00): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 03:06:40 np0005539551 systemd-udevd[252157]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.428 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b514d6-49c1-453c-a527-ce849312ddae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.431 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0e27b413-6c8d-4337-9ad6-998bb36f35a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.4511] device (tap47634f95-00): carrier: link connected
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.457 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d7bde3-3641-4db8-b8fc-fd1169c53577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.473 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6902822d-503e-44c5-b44c-18bda4be97c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47634f95-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:bd:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664733, 'reachable_time': 15492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252188, 'error': None, 'target': 'ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.487 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[241e5fce-f396-425b-8fb5-9bf0f64da4da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:bd8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664733, 'tstamp': 664733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252189, 'error': None, 'target': 'ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.502 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd54d94-2604-408e-8738-d175ed2d70f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47634f95-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:bd:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664733, 'reachable_time': 15492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252190, 'error': None, 'target': 'ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.535 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e122ca3e-b349-4b8b-b287-d867998b83b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.591 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[acf5928d-479a-4f1b-ad2f-2673fd548df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.592 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47634f95-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.592 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.593 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47634f95-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.594 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 kernel: tap47634f95-00: entered promiscuous mode
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.596 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 NetworkManager[48922]: <info>  [1764403600.5966] manager: (tap47634f95-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.597 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47634f95-00, col_values=(('external_ids', {'iface-id': 'dbb3ac0a-eb9c-467c-8888-29cf29f00e80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.598 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:40Z|00237|binding|INFO|Releasing lport dbb3ac0a-eb9c-467c-8888-29cf29f00e80 from this chassis (sb_readonly=0)
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.599 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47634f95-0b49-48e4-9aa2-5fcc460d27f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47634f95-0b49-48e4-9aa2-5fcc460d27f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.600 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a28727da-bb97-49f8-953d-333686b53691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.600 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-47634f95-0b49-48e4-9aa2-5fcc460d27f7
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/47634f95-0b49-48e4-9aa2-5fcc460d27f7.pid.haproxy
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 47634f95-0b49-48e4-9aa2-5fcc460d27f7
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:40.601 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'env', 'PROCESS_TAG=haproxy-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47634f95-0b49-48e4-9aa2-5fcc460d27f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.945 227364 DEBUG nova.compute.manager [req-1090378e-c69f-4d48-b06b-aa3a21bb1a87 req-386747c1-eb90-4cc2-8d96-d2d9f4b214f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.947 227364 DEBUG oslo_concurrency.lockutils [req-1090378e-c69f-4d48-b06b-aa3a21bb1a87 req-386747c1-eb90-4cc2-8d96-d2d9f4b214f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.948 227364 DEBUG oslo_concurrency.lockutils [req-1090378e-c69f-4d48-b06b-aa3a21bb1a87 req-386747c1-eb90-4cc2-8d96-d2d9f4b214f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.948 227364 DEBUG oslo_concurrency.lockutils [req-1090378e-c69f-4d48-b06b-aa3a21bb1a87 req-386747c1-eb90-4cc2-8d96-d2d9f4b214f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:40 np0005539551 nova_compute[227360]: 2025-11-29 08:06:40.949 227364 DEBUG nova.compute.manager [req-1090378e-c69f-4d48-b06b-aa3a21bb1a87 req-386747c1-eb90-4cc2-8d96-d2d9f4b214f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Processing event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:40 np0005539551 podman[252259]: 2025-11-29 08:06:40.958351768 +0000 UTC m=+0.058855438 container create 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:06:40 np0005539551 systemd[1]: Started libpod-conmon-2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb.scope.
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.007 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403601.006653, e957695b-44c7-4a21-aabb-9ddc5914a26a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.008 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.010 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:41 np0005539551 podman[252259]: 2025-11-29 08:06:40.921168907 +0000 UTC m=+0.021672667 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.014 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:41 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.018 227364 INFO nova.virt.libvirt.driver [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Instance spawned successfully.#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.018 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:06:41 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54787c30ad4ccaea43944b2cacc28123106984796c12487d391c9d61823b2bf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:41 np0005539551 podman[252259]: 2025-11-29 08:06:41.036375985 +0000 UTC m=+0.136879665 container init 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:41 np0005539551 podman[252259]: 2025-11-29 08:06:41.041575823 +0000 UTC m=+0.142079483 container start 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.042 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.048 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:41 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [NOTICE]   (252283) : New worker (252285) forked
Nov 29 03:06:41 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [NOTICE]   (252283) : Loading success.
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.065 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.066 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.067 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.068 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.069 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.070 227364 DEBUG nova.virt.libvirt.driver [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.075 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.076 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403601.0067914, e957695b-44c7-4a21-aabb-9ddc5914a26a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.076 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.106 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.109 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403601.0134304, e957695b-44c7-4a21-aabb-9ddc5914a26a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.110 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.316 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.325 227364 INFO nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.325 227364 DEBUG nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.327 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.356 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.400 227364 INFO nova.compute.manager [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Took 9.45 seconds to build instance.#033[00m
Nov 29 03:06:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:41 np0005539551 nova_compute[227360]: 2025-11-29 08:06:41.430 227364 DEBUG oslo_concurrency.lockutils [None req-13f3fdb5-bc8b-4380-aa33-6eb80401c6e0 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 03:06:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:43.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.561 227364 DEBUG nova.compute.manager [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.562 227364 DEBUG oslo_concurrency.lockutils [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.562 227364 DEBUG oslo_concurrency.lockutils [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.562 227364 DEBUG oslo_concurrency.lockutils [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.562 227364 DEBUG nova.compute.manager [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] No waiting events found dispatching network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.563 227364 WARNING nova.compute.manager [req-245c2323-1a9d-453d-81aa-7449cccce38a req-669d0127-bd18-468e-91a5-da18ba61b125 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received unexpected event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.817 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.819 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.819 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.819 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.820 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.821 227364 INFO nova.compute.manager [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Terminating instance#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.822 227364 DEBUG nova.compute.manager [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:43 np0005539551 kernel: tapaa8f42d7-3f (unregistering): left promiscuous mode
Nov 29 03:06:43 np0005539551 NetworkManager[48922]: <info>  [1764403603.9639] device (tapaa8f42d7-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.971 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:43 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:43Z|00238|binding|INFO|Releasing lport aa8f42d7-3fdd-4195-8d37-90fa410dee5a from this chassis (sb_readonly=0)
Nov 29 03:06:43 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:43Z|00239|binding|INFO|Setting lport aa8f42d7-3fdd-4195-8d37-90fa410dee5a down in Southbound
Nov 29 03:06:43 np0005539551 ovn_controller[130266]: 2025-11-29T08:06:43Z|00240|binding|INFO|Removing iface tapaa8f42d7-3f ovn-installed in OVS
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.973 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:43.978 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:3a:0c 10.100.0.12'], port_security=['fa:16:3e:70:3a:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e957695b-44c7-4a21-aabb-9ddc5914a26a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b873d94187455ea17d369aa54754b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90c24a64-4b2d-4433-b7d6-cb986d4ad977', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=441708a5-b403-4db6-93c7-8c6405400645, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=aa8f42d7-3fdd-4195-8d37-90fa410dee5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:43.979 139482 INFO neutron.agent.ovn.metadata.agent [-] Port aa8f42d7-3fdd-4195-8d37-90fa410dee5a in datapath 47634f95-0b49-48e4-9aa2-5fcc460d27f7 unbound from our chassis#033[00m
Nov 29 03:06:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:43.980 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47634f95-0b49-48e4-9aa2-5fcc460d27f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:43.981 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5516f8d-6def-49ad-8c16-16fe4ff0c9bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:43.982 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7 namespace which is not needed anymore#033[00m
Nov 29 03:06:43 np0005539551 nova_compute[227360]: 2025-11-29 08:06:43.991 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539551 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000041.scope: Consumed 3.451s CPU time.
Nov 29 03:06:44 np0005539551 systemd-machined[190756]: Machine qemu-31-instance-00000041 terminated.
Nov 29 03:06:44 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [NOTICE]   (252283) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:44 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [NOTICE]   (252283) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:44 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [WARNING]  (252283) : Exiting Master process...
Nov 29 03:06:44 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [ALERT]    (252283) : Current worker (252285) exited with code 143 (Terminated)
Nov 29 03:06:44 np0005539551 neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7[252279]: [WARNING]  (252283) : All workers exited. Exiting... (0)
Nov 29 03:06:44 np0005539551 systemd[1]: libpod-2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539551 podman[252316]: 2025-11-29 08:06:44.111244522 +0000 UTC m=+0.047284031 container died 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:06:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-54787c30ad4ccaea43944b2cacc28123106984796c12487d391c9d61823b2bf5-merged.mount: Deactivated successfully.
Nov 29 03:06:44 np0005539551 podman[252316]: 2025-11-29 08:06:44.17314511 +0000 UTC m=+0.109184669 container cleanup 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:44 np0005539551 systemd[1]: libpod-conmon-2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 podman[252346]: 2025-11-29 08:06:44.242887977 +0000 UTC m=+0.046814718 container remove 2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.249 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bbfcf7-1361-4b7b-ba06-4ed47ab2cd8b]: (4, ('Sat Nov 29 08:06:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7 (2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb)\n2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb\nSat Nov 29 08:06:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7 (2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb)\n2398e026dc979a101c149a5e57b1b46f839ebf3562a52b60d2cf2d71e0600ceb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.250 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7909820e-2ee0-4107-893e-7726d70177d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.251 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47634f95-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 kernel: tap47634f95-00: left promiscuous mode
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.254 227364 INFO nova.virt.libvirt.driver [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Instance destroyed successfully.#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.254 227364 DEBUG nova.objects.instance [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lazy-loading 'resources' on Instance uuid e957695b-44c7-4a21-aabb-9ddc5914a26a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.275 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c655f989-4ddb-41a8-875f-42de049d2e7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.276 227364 DEBUG nova.virt.libvirt.vif [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-600217881',display_name='tempest-ImagesNegativeTestJSON-server-600217881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-600217881',id=65,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c1b873d94187455ea17d369aa54754b3',ramdisk_id='',reservation_id='r-4689lenp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1047505779',owner_user_name='tempest-ImagesNegativeTestJSON-1047505779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:41Z,user_data=None,user_id='7f7f42bfe6ee49b088f30c9eb496e288',uuid=e957695b-44c7-4a21-aabb-9ddc5914a26a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.276 227364 DEBUG nova.network.os_vif_util [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converting VIF {"id": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "address": "fa:16:3e:70:3a:0c", "network": {"id": "47634f95-0b49-48e4-9aa2-5fcc460d27f7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-887794569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c1b873d94187455ea17d369aa54754b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa8f42d7-3f", "ovs_interfaceid": "aa8f42d7-3fdd-4195-8d37-90fa410dee5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.277 227364 DEBUG nova.network.os_vif_util [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.278 227364 DEBUG os_vif [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.281 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8f42d7-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.287 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.286 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f949a0a-d7fc-45a4-b88f-155d43774f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.288 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[662b0dd3-4a7d-4ccb-89d3-a9e2656fa666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.289 227364 INFO os_vif [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:3a:0c,bridge_name='br-int',has_traffic_filtering=True,id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a,network=Network(47634f95-0b49-48e4-9aa2-5fcc460d27f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa8f42d7-3f')#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.303 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb247f8-5b06-43f2-b14e-8fd4ededce9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664726, 'reachable_time': 29133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252375, 'error': None, 'target': 'ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.305 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47634f95-0b49-48e4-9aa2-5fcc460d27f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:06:44.305 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3c983b-6af2-4701-89cf-10ecf4223f44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539551 systemd[1]: run-netns-ovnmeta\x2d47634f95\x2d0b49\x2d48e4\x2d9aa2\x2d5fcc460d27f7.mount: Deactivated successfully.
Nov 29 03:06:44 np0005539551 nova_compute[227360]: 2025-11-29 08:06:44.769 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:45.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:45.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.659 227364 DEBUG nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-unplugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.659 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.659 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.659 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.660 227364 DEBUG nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] No waiting events found dispatching network-vif-unplugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.660 227364 DEBUG nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-unplugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.660 227364 DEBUG nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.660 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.660 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.661 227364 DEBUG oslo_concurrency.lockutils [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.661 227364 DEBUG nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] No waiting events found dispatching network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:45 np0005539551 nova_compute[227360]: 2025-11-29 08:06:45.661 227364 WARNING nova.compute.manager [req-7cf314c5-9351-47d3-b385-d44e10c73380 req-c414118a-2a94-4ecf-adba-892137a599db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received unexpected event network-vif-plugged-aa8f42d7-3fdd-4195-8d37-90fa410dee5a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.836 227364 INFO nova.virt.libvirt.driver [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Deleting instance files /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a_del#033[00m
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.837 227364 INFO nova.virt.libvirt.driver [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Deletion of /var/lib/nova/instances/e957695b-44c7-4a21-aabb-9ddc5914a26a_del complete#033[00m
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.924 227364 INFO nova.compute.manager [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Took 3.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.925 227364 DEBUG oslo.service.loopingcall [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.925 227364 DEBUG nova.compute.manager [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:46 np0005539551 nova_compute[227360]: 2025-11-29 08:06:46.926 227364 DEBUG nova.network.neutron [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:47.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539551 nova_compute[227360]: 2025-11-29 08:06:49.237 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403594.235721, 69d87725-bb3d-4966-8db2-cc1e098be52d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:49 np0005539551 nova_compute[227360]: 2025-11-29 08:06:49.237 227364 INFO nova.compute.manager [-] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:49 np0005539551 nova_compute[227360]: 2025-11-29 08:06:49.257 227364 DEBUG nova.compute.manager [None req-59ea341e-6e65-4ff7-86f6-0fb8e63222d0 - - - - - -] [instance: 69d87725-bb3d-4966-8db2-cc1e098be52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:49 np0005539551 nova_compute[227360]: 2025-11-29 08:06:49.284 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539551 nova_compute[227360]: 2025-11-29 08:06:49.771 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:50 np0005539551 nova_compute[227360]: 2025-11-29 08:06:50.498 227364 DEBUG nova.network.neutron [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:50 np0005539551 nova_compute[227360]: 2025-11-29 08:06:50.872 227364 DEBUG nova.compute.manager [req-e1159b6d-247f-4e84-9f4c-d2111ed79c9e req-b3b58bf1-5033-44c2-bddf-19e2cbea774d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Received event network-vif-deleted-aa8f42d7-3fdd-4195-8d37-90fa410dee5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:50 np0005539551 nova_compute[227360]: 2025-11-29 08:06:50.872 227364 INFO nova.compute.manager [req-e1159b6d-247f-4e84-9f4c-d2111ed79c9e req-b3b58bf1-5033-44c2-bddf-19e2cbea774d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Neutron deleted interface aa8f42d7-3fdd-4195-8d37-90fa410dee5a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:06:50 np0005539551 nova_compute[227360]: 2025-11-29 08:06:50.872 227364 DEBUG nova.network.neutron [req-e1159b6d-247f-4e84-9f4c-d2111ed79c9e req-b3b58bf1-5033-44c2-bddf-19e2cbea774d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:51 np0005539551 nova_compute[227360]: 2025-11-29 08:06:51.045 227364 DEBUG nova.compute.manager [req-e1159b6d-247f-4e84-9f4c-d2111ed79c9e req-b3b58bf1-5033-44c2-bddf-19e2cbea774d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Detach interface failed, port_id=aa8f42d7-3fdd-4195-8d37-90fa410dee5a, reason: Instance e957695b-44c7-4a21-aabb-9ddc5914a26a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:06:51 np0005539551 nova_compute[227360]: 2025-11-29 08:06:51.047 227364 INFO nova.compute.manager [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Took 4.12 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:51.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 03:06:51 np0005539551 nova_compute[227360]: 2025-11-29 08:06:51.771 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:51 np0005539551 nova_compute[227360]: 2025-11-29 08:06:51.771 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:51 np0005539551 nova_compute[227360]: 2025-11-29 08:06:51.821 227364 DEBUG oslo_concurrency.processutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2307748584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.286 227364 DEBUG oslo_concurrency.processutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.291 227364 DEBUG nova.compute.provider_tree [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.429 227364 DEBUG nova.scheduler.client.report [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.611 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.658 227364 INFO nova.scheduler.client.report [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Deleted allocations for instance e957695b-44c7-4a21-aabb-9ddc5914a26a#033[00m
Nov 29 03:06:52 np0005539551 nova_compute[227360]: 2025-11-29 08:06:52.780 227364 DEBUG oslo_concurrency.lockutils [None req-5e406361-17cf-440b-b2e8-ab0851e32a50 7f7f42bfe6ee49b088f30c9eb496e288 c1b873d94187455ea17d369aa54754b3 - - default default] Lock "e957695b-44c7-4a21-aabb-9ddc5914a26a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:53.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:53.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539551 nova_compute[227360]: 2025-11-29 08:06:54.284 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:54 np0005539551 nova_compute[227360]: 2025-11-29 08:06:54.775 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:55.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:59 np0005539551 nova_compute[227360]: 2025-11-29 08:06:59.253 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403604.2515328, e957695b-44c7-4a21-aabb-9ddc5914a26a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:59 np0005539551 nova_compute[227360]: 2025-11-29 08:06:59.253 227364 INFO nova.compute.manager [-] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:59 np0005539551 nova_compute[227360]: 2025-11-29 08:06:59.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:59.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:06:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:59 np0005539551 nova_compute[227360]: 2025-11-29 08:06:59.805 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:00 np0005539551 nova_compute[227360]: 2025-11-29 08:07:00.913 227364 DEBUG nova.compute.manager [None req-65bc5717-7538-4716-baa1-12639f6620b5 - - - - - -] [instance: e957695b-44c7-4a21-aabb-9ddc5914a26a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:01.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:01.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:03 np0005539551 podman[252669]: 2025-11-29 08:07:03.634596786 +0000 UTC m=+0.083800053 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:03 np0005539551 podman[252668]: 2025-11-29 08:07:03.64827734 +0000 UTC m=+0.097481967 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:07:03 np0005539551 podman[252667]: 2025-11-29 08:07:03.658132963 +0000 UTC m=+0.109092626 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:07:04 np0005539551 nova_compute[227360]: 2025-11-29 08:07:04.289 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539551 nova_compute[227360]: 2025-11-29 08:07:04.813 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:05.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:07:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:05.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:07:06 np0005539551 nova_compute[227360]: 2025-11-29 08:07:06.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:07.568 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:07.569 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:07:07 np0005539551 nova_compute[227360]: 2025-11-29 08:07:07.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:07.570 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:09 np0005539551 nova_compute[227360]: 2025-11-29 08:07:09.290 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:09 np0005539551 nova_compute[227360]: 2025-11-29 08:07:09.809 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:13.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:14 np0005539551 nova_compute[227360]: 2025-11-29 08:07:14.293 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:14 np0005539551 nova_compute[227360]: 2025-11-29 08:07:14.811 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:15.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:17 np0005539551 nova_compute[227360]: 2025-11-29 08:07:17.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:17 np0005539551 nova_compute[227360]: 2025-11-29 08:07:17.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:07:17 np0005539551 nova_compute[227360]: 2025-11-29 08:07:17.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:07:17 np0005539551 nova_compute[227360]: 2025-11-29 08:07:17.439 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:07:17 np0005539551 nova_compute[227360]: 2025-11-29 08:07:17.439 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:17.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:18 np0005539551 nova_compute[227360]: 2025-11-29 08:07:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539551 nova_compute[227360]: 2025-11-29 08:07:19.294 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:19.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:19 np0005539551 nova_compute[227360]: 2025-11-29 08:07:19.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:19.858 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:19.858 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:19.858 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:20 np0005539551 nova_compute[227360]: 2025-11-29 08:07:20.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539551 nova_compute[227360]: 2025-11-29 08:07:21.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:21.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.444 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.445 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.445 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.445 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.445 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/300048197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:22 np0005539551 nova_compute[227360]: 2025-11-29 08:07:22.949 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.154 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.156 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4765MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.156 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.156 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.228 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.229 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.247 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:23.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132297363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.687 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.694 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.723 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.757 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:07:23 np0005539551 nova_compute[227360]: 2025-11-29 08:07:23.758 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:24 np0005539551 nova_compute[227360]: 2025-11-29 08:07:24.295 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:24 np0005539551 nova_compute[227360]: 2025-11-29 08:07:24.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:25.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:25 np0005539551 nova_compute[227360]: 2025-11-29 08:07:25.920 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:25 np0005539551 nova_compute[227360]: 2025-11-29 08:07:25.920 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:25 np0005539551 nova_compute[227360]: 2025-11-29 08:07:25.954 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:26 np0005539551 nova_compute[227360]: 2025-11-29 08:07:26.049 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:26 np0005539551 nova_compute[227360]: 2025-11-29 08:07:26.049 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:26 np0005539551 nova_compute[227360]: 2025-11-29 08:07:26.062 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:26 np0005539551 nova_compute[227360]: 2025-11-29 08:07:26.062 227364 INFO nova.compute.claims [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:07:26 np0005539551 nova_compute[227360]: 2025-11-29 08:07:26.175 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1297453971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.067 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.073 227364 DEBUG nova.compute.provider_tree [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.138 227364 DEBUG nova.scheduler.client.report [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.195 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.197 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.259 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.260 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.288 227364 INFO nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.363 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:27.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.533 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.534 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.534 227364 INFO nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Creating image(s)#033[00m
Nov 29 03:07:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.562 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.595 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.631 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.636 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.660 227364 DEBUG nova.policy [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd0a358b2256f49b085b6e0d27911e743', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '02545fd3735d4977afe5612632ee4832', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.693 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.694 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.695 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.695 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.722 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:27 np0005539551 nova_compute[227360]: 2025-11-29 08:07:27.727 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9cebdce6-6212-492b-8105-301f19edb8b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.201 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9cebdce6-6212-492b-8105-301f19edb8b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.317 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] resizing rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.381 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Successfully created port: 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.484 227364 DEBUG nova.objects.instance [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lazy-loading 'migration_context' on Instance uuid 9cebdce6-6212-492b-8105-301f19edb8b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.499 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.500 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Ensure instance console log exists: /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.501 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.502 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.502 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:28 np0005539551 nova_compute[227360]: 2025-11-29 08:07:28.760 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.298 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.577 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Successfully updated port: 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.590 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.590 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquired lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.590 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.748 227364 DEBUG nova.compute.manager [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-changed-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.748 227364 DEBUG nova.compute.manager [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Refreshing instance network info cache due to event network-changed-8f05eb7d-5dc1-48da-b1b2-dfee27931e85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.748 227364 DEBUG oslo_concurrency.lockutils [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.773 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:29 np0005539551 nova_compute[227360]: 2025-11-29 08:07:29.816 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:31.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.302 227364 DEBUG nova.network.neutron [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Updating instance_info_cache with network_info: [{"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.329 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Releasing lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.330 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Instance network_info: |[{"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.330 227364 DEBUG oslo_concurrency.lockutils [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.330 227364 DEBUG nova.network.neutron [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Refreshing network info cache for port 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.332 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Start _get_guest_xml network_info=[{"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.336 227364 WARNING nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.340 227364 DEBUG nova.virt.libvirt.host [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.340 227364 DEBUG nova.virt.libvirt.host [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.345 227364 DEBUG nova.virt.libvirt.host [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.345 227364 DEBUG nova.virt.libvirt.host [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.346 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.346 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.346 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.347 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.347 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.347 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.347 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.348 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.348 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.348 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.348 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.349 227364 DEBUG nova.virt.hardware [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.351 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:07:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:33.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/275151365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.765 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.799 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:33 np0005539551 nova_compute[227360]: 2025-11-29 08:07:33.804 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/522255629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.232 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.233 227364 DEBUG nova.virt.libvirt.vif [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1061952082',display_name='tempest-ImagesOneServerTestJSON-server-1061952082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1061952082',id=69,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02545fd3735d4977afe5612632ee4832',ramdisk_id='',reservation_id='r-456db5s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941250990',owner_user_name='tempest-ImagesOneServerTestJSON-941250990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:27Z,user_data=None,user_id='d0a358b2256f49b085b6e0d27911e743',uuid=9cebdce6-6212-492b-8105-301f19edb8b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.234 227364 DEBUG nova.network.os_vif_util [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converting VIF {"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.234 227364 DEBUG nova.network.os_vif_util [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.236 227364 DEBUG nova.objects.instance [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9cebdce6-6212-492b-8105-301f19edb8b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.254 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <uuid>9cebdce6-6212-492b-8105-301f19edb8b3</uuid>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <name>instance-00000045</name>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1061952082</nova:name>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:07:33</nova:creationTime>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:user uuid="d0a358b2256f49b085b6e0d27911e743">tempest-ImagesOneServerTestJSON-941250990-project-member</nova:user>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:project uuid="02545fd3735d4977afe5612632ee4832">tempest-ImagesOneServerTestJSON-941250990</nova:project>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <nova:port uuid="8f05eb7d-5dc1-48da-b1b2-dfee27931e85">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="serial">9cebdce6-6212-492b-8105-301f19edb8b3</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="uuid">9cebdce6-6212-492b-8105-301f19edb8b3</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9cebdce6-6212-492b-8105-301f19edb8b3_disk">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9cebdce6-6212-492b-8105-301f19edb8b3_disk.config">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a6:a9:24"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <target dev="tap8f05eb7d-5d"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/console.log" append="off"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:07:34 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:07:34 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:07:34 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:07:34 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.256 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Preparing to wait for external event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.257 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.258 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.258 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.260 227364 DEBUG nova.virt.libvirt.vif [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1061952082',display_name='tempest-ImagesOneServerTestJSON-server-1061952082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1061952082',id=69,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02545fd3735d4977afe5612632ee4832',ramdisk_id='',reservation_id='r-456db5s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941250990',owner_user_name='tempest-Im
agesOneServerTestJSON-941250990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:27Z,user_data=None,user_id='d0a358b2256f49b085b6e0d27911e743',uuid=9cebdce6-6212-492b-8105-301f19edb8b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.260 227364 DEBUG nova.network.os_vif_util [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converting VIF {"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.261 227364 DEBUG nova.network.os_vif_util [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.263 227364 DEBUG os_vif [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.264 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.266 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.267 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.274 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f05eb7d-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.276 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f05eb7d-5d, col_values=(('external_ids', {'iface-id': '8f05eb7d-5dc1-48da-b1b2-dfee27931e85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:a9:24', 'vm-uuid': '9cebdce6-6212-492b-8105-301f19edb8b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.278 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539551 NetworkManager[48922]: <info>  [1764403654.2794] manager: (tap8f05eb7d-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.286 227364 INFO os_vif [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d')#033[00m
Nov 29 03:07:34 np0005539551 podman[253078]: 2025-11-29 08:07:34.62444701 +0000 UTC m=+0.060926654 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:07:34 np0005539551 podman[253077]: 2025-11-29 08:07:34.644588596 +0000 UTC m=+0.077018312 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:07:34 np0005539551 podman[253076]: 2025-11-29 08:07:34.669055537 +0000 UTC m=+0.103888147 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.755 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.755 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.756 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] No VIF found with MAC fa:16:3e:a6:a9:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.757 227364 INFO nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Using config drive#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.879 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:34 np0005539551 nova_compute[227360]: 2025-11-29 08:07:34.885 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.328 227364 DEBUG nova.network.neutron [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Updated VIF entry in instance network info cache for port 8f05eb7d-5dc1-48da-b1b2-dfee27931e85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.328 227364 DEBUG nova.network.neutron [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Updating instance_info_cache with network_info: [{"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.353 227364 DEBUG oslo_concurrency.lockutils [req-7228b913-4c1d-4908-bf64-1d920c0391ec req-9046f9cc-92b3-4b10-9d74-12c4a8fcf62d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9cebdce6-6212-492b-8105-301f19edb8b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.385 227364 INFO nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Creating config drive at /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.390 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf61xtlug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:35.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.535 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf61xtlug" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.564 227364 DEBUG nova.storage.rbd_utils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] rbd image 9cebdce6-6212-492b-8105-301f19edb8b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.567 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config 9cebdce6-6212-492b-8105-301f19edb8b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.727 227364 DEBUG oslo_concurrency.processutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config 9cebdce6-6212-492b-8105-301f19edb8b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.727 227364 INFO nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Deleting local config drive /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:35 np0005539551 kernel: tap8f05eb7d-5d: entered promiscuous mode
Nov 29 03:07:35 np0005539551 NetworkManager[48922]: <info>  [1764403655.7769] manager: (tap8f05eb7d-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:35Z|00241|binding|INFO|Claiming lport 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 for this chassis.
Nov 29 03:07:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:35Z|00242|binding|INFO|8f05eb7d-5dc1-48da-b1b2-dfee27931e85: Claiming fa:16:3e:a6:a9:24 10.100.0.11
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.796 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a9:24 10.100.0.11'], port_security=['fa:16:3e:a6:a9:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9cebdce6-6212-492b-8105-301f19edb8b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02545fd3735d4977afe5612632ee4832', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2af8c5ee-3150-4114-a613-af89d1411c4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7c274-6c62-4970-a2aa-d8b6cb1c3d25, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=8f05eb7d-5dc1-48da-b1b2-dfee27931e85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.797 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 in datapath e2cfff13-8384-4cb8-9851-9511ccb7ff82 bound to our chassis#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.798 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e2cfff13-8384-4cb8-9851-9511ccb7ff82#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.807 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4696c03-bae4-4e43-bf29-4871f9e3ab0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.807 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape2cfff13-81 in ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.809 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape2cfff13-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.809 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce6fdc2-66e2-431e-b333-38ae7f2ad425]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 systemd-udevd[253209]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.810 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef8fa29-443f-4c04-a3e5-6925df58f613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 systemd-machined[190756]: New machine qemu-32-instance-00000045.
Nov 29 03:07:35 np0005539551 NetworkManager[48922]: <info>  [1764403655.8219] device (tap8f05eb7d-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:35 np0005539551 NetworkManager[48922]: <info>  [1764403655.8232] device (tap8f05eb7d-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.822 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d65ad55b-e39c-4d13-978b-342fc8cf48de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 systemd[1]: Started Virtual Machine qemu-32-instance-00000045.
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.845 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6314b714-57d2-4819-9e14-bd6f6aa3f3a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.852 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:35Z|00243|binding|INFO|Setting lport 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 ovn-installed in OVS
Nov 29 03:07:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:35Z|00244|binding|INFO|Setting lport 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 up in Southbound
Nov 29 03:07:35 np0005539551 nova_compute[227360]: 2025-11-29 08:07:35.856 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.876 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5901a432-accb-4ca5-92ab-5620f54ffeb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 NetworkManager[48922]: <info>  [1764403655.8823] manager: (tape2cfff13-80): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.881 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[247a6fac-b35a-42eb-aeac-13f9f7b5dcde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.914 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d878bebd-cd64-4298-a651-f850cbc1a616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.917 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[13f639b9-7c3a-424c-85ba-aaca58b4b76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 NetworkManager[48922]: <info>  [1764403655.9357] device (tape2cfff13-80): carrier: link connected
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.940 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb8c60a-6fa3-4aff-a8dc-1b4d4fba8351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.957 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3d00eb30-540a-4fe9-9fb8-c674d603643a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2cfff13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:ef:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670281, 'reachable_time': 18970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253241, 'error': None, 'target': 'ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.969 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1770bf45-9d9c-4aae-ba7d-51a45c6e6e0b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:ef84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670281, 'tstamp': 670281}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253242, 'error': None, 'target': 'ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:35.983 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee74a73-349f-444d-abbc-8f8c7d0fb3f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2cfff13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:ef:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670281, 'reachable_time': 18970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253243, 'error': None, 'target': 'ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.010 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3688a90c-ffd0-4706-94cd-a37468d50bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.060 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0b816f-670e-45b9-8e98-2cbec1fdc107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.062 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2cfff13-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.062 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.062 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2cfff13-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.064 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:36 np0005539551 kernel: tape2cfff13-80: entered promiscuous mode
Nov 29 03:07:36 np0005539551 NetworkManager[48922]: <info>  [1764403656.0652] manager: (tape2cfff13-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.066 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.067 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape2cfff13-80, col_values=(('external_ids', {'iface-id': '3b6224e3-18ac-4e83-9ce8-1981565841ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.068 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:36Z|00245|binding|INFO|Releasing lport 3b6224e3-18ac-4e83-9ce8-1981565841ff from this chassis (sb_readonly=0)
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.087 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.089 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e2cfff13-8384-4cb8-9851-9511ccb7ff82.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e2cfff13-8384-4cb8-9851-9511ccb7ff82.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.090 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7430f1-6cf4-4ff4-ab8e-7e795428851f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.091 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-e2cfff13-8384-4cb8-9851-9511ccb7ff82
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/e2cfff13-8384-4cb8-9851-9511ccb7ff82.pid.haproxy
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID e2cfff13-8384-4cb8-9851-9511ccb7ff82
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:36.092 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'env', 'PROCESS_TAG=haproxy-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e2cfff13-8384-4cb8-9851-9511ccb7ff82.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.438 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403656.4376683, 9cebdce6-6212-492b-8105-301f19edb8b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.439 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.462 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.466 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403656.4378734, 9cebdce6-6212-492b-8105-301f19edb8b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.466 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:36 np0005539551 podman[253317]: 2025-11-29 08:07:36.470278456 +0000 UTC m=+0.055875689 container create 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:07:36 np0005539551 systemd[1]: Started libpod-conmon-5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40.scope.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.499 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.504 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:36 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.527 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:36 np0005539551 podman[253317]: 2025-11-29 08:07:36.443057762 +0000 UTC m=+0.028655045 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:36 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed6bc60de2e4b0403bcd2a4c936e2fdd93ecd290fd56e876566c60c337f4fdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:36 np0005539551 podman[253317]: 2025-11-29 08:07:36.552864396 +0000 UTC m=+0.138461659 container init 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:07:36 np0005539551 podman[253317]: 2025-11-29 08:07:36.55904226 +0000 UTC m=+0.144639493 container start 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:36 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [NOTICE]   (253337) : New worker (253339) forked
Nov 29 03:07:36 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [NOTICE]   (253337) : Loading success.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.732 227364 DEBUG nova.compute.manager [req-a0af2318-d3a3-4abb-b42a-3c63f7f848b5 req-0abfcc5f-62d1-4a42-9e85-10001f985ae7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.733 227364 DEBUG oslo_concurrency.lockutils [req-a0af2318-d3a3-4abb-b42a-3c63f7f848b5 req-0abfcc5f-62d1-4a42-9e85-10001f985ae7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.733 227364 DEBUG oslo_concurrency.lockutils [req-a0af2318-d3a3-4abb-b42a-3c63f7f848b5 req-0abfcc5f-62d1-4a42-9e85-10001f985ae7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.733 227364 DEBUG oslo_concurrency.lockutils [req-a0af2318-d3a3-4abb-b42a-3c63f7f848b5 req-0abfcc5f-62d1-4a42-9e85-10001f985ae7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.734 227364 DEBUG nova.compute.manager [req-a0af2318-d3a3-4abb-b42a-3c63f7f848b5 req-0abfcc5f-62d1-4a42-9e85-10001f985ae7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Processing event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.734 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.738 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403656.7380564, 9cebdce6-6212-492b-8105-301f19edb8b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.738 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] VM Resumed (Lifecycle Event)
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.740 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.743 227364 INFO nova.virt.libvirt.driver [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Instance spawned successfully.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.743 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.767 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.770 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.792 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.793 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.793 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.793 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.794 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.794 227364 DEBUG nova.virt.libvirt.driver [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.803 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.863 227364 INFO nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 9.33 seconds to spawn the instance on the hypervisor.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.864 227364 DEBUG nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.935 227364 INFO nova.compute.manager [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 10.92 seconds to build instance.
Nov 29 03:07:36 np0005539551 nova_compute[227360]: 2025-11-29 08:07:36.968 227364 DEBUG oslo_concurrency.lockutils [None req-fcecc563-c22c-4d6b-990b-f70fae927b05 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.853 227364 DEBUG nova.compute.manager [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.854 227364 DEBUG oslo_concurrency.lockutils [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.855 227364 DEBUG oslo_concurrency.lockutils [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.856 227364 DEBUG oslo_concurrency.lockutils [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.857 227364 DEBUG nova.compute.manager [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] No waiting events found dispatching network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:38 np0005539551 nova_compute[227360]: 2025-11-29 08:07:38.858 227364 WARNING nova.compute.manager [req-ea2c1e42-539f-40a5-94fb-66fa0ea8bf67 req-c2951c17-65eb-47a1-8ab5-82868b2b79d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received unexpected event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 for instance with vm_state active and task_state None.
Nov 29 03:07:39 np0005539551 nova_compute[227360]: 2025-11-29 08:07:39.279 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:39 np0005539551 nova_compute[227360]: 2025-11-29 08:07:39.427 227364 DEBUG nova.compute.manager [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:39.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:39 np0005539551 nova_compute[227360]: 2025-11-29 08:07:39.769 227364 INFO nova.compute.manager [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] instance snapshotting
Nov 29 03:07:39 np0005539551 nova_compute[227360]: 2025-11-29 08:07:39.821 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:40 np0005539551 nova_compute[227360]: 2025-11-29 08:07:40.046 227364 INFO nova.virt.libvirt.driver [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Beginning live snapshot process
Nov 29 03:07:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:40 np0005539551 nova_compute[227360]: 2025-11-29 08:07:40.328 227364 DEBUG nova.virt.libvirt.imagebackend [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:07:40 np0005539551 nova_compute[227360]: 2025-11-29 08:07:40.576 227364 DEBUG nova.storage.rbd_utils [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] creating snapshot(610df694bf9b49d2862fd71aaa73d4f9) on rbd image(9cebdce6-6212-492b-8105-301f19edb8b3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:07:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 03:07:41 np0005539551 nova_compute[227360]: 2025-11-29 08:07:41.271 227364 DEBUG nova.storage.rbd_utils [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] cloning vms/9cebdce6-6212-492b-8105-301f19edb8b3_disk@610df694bf9b49d2862fd71aaa73d4f9 to images/6a630c30-934b-49f9-92fa-38811e802998 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:07:41 np0005539551 nova_compute[227360]: 2025-11-29 08:07:41.419 227364 DEBUG nova.storage.rbd_utils [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] flattening images/6a630c30-934b-49f9-92fa-38811e802998 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:07:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:41.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:41.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:41 np0005539551 nova_compute[227360]: 2025-11-29 08:07:41.681 227364 DEBUG nova.storage.rbd_utils [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] removing snapshot(610df694bf9b49d2862fd71aaa73d4f9) on rbd image(9cebdce6-6212-492b-8105-301f19edb8b3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:07:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 03:07:42 np0005539551 nova_compute[227360]: 2025-11-29 08:07:42.275 227364 DEBUG nova.storage.rbd_utils [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] creating snapshot(snap) on rbd image(6a630c30-934b-49f9-92fa-38811e802998) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:07:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 03:07:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:43.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:43.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:44 np0005539551 nova_compute[227360]: 2025-11-29 08:07:44.281 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 03:07:44 np0005539551 nova_compute[227360]: 2025-11-29 08:07:44.804 227364 INFO nova.virt.libvirt.driver [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Snapshot image upload complete
Nov 29 03:07:44 np0005539551 nova_compute[227360]: 2025-11-29 08:07:44.804 227364 INFO nova.compute.manager [None req-85c28f39-79b4-4412-88bd-7d23f068308e d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 5.03 seconds to snapshot the instance on the hypervisor.
Nov 29 03:07:44 np0005539551 nova_compute[227360]: 2025-11-29 08:07:44.821 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 03:07:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:45.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:45.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 03:07:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 03:07:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:47.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:48 np0005539551 nova_compute[227360]: 2025-11-29 08:07:48.674 227364 DEBUG nova.compute.manager [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:48 np0005539551 nova_compute[227360]: 2025-11-29 08:07:48.721 227364 INFO nova.compute.manager [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] instance snapshotting
Nov 29 03:07:49 np0005539551 nova_compute[227360]: 2025-11-29 08:07:49.095 227364 INFO nova.virt.libvirt.driver [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Beginning live snapshot process
Nov 29 03:07:49 np0005539551 nova_compute[227360]: 2025-11-29 08:07:49.262 227364 DEBUG nova.virt.libvirt.imagebackend [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:07:49 np0005539551 nova_compute[227360]: 2025-11-29 08:07:49.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:49.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539551 nova_compute[227360]: 2025-11-29 08:07:49.546 227364 DEBUG nova.storage.rbd_utils [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] creating snapshot(9c5b136032df4957bb144322d364f5e6) on rbd image(9cebdce6-6212-492b-8105-301f19edb8b3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:07:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:49.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:49 np0005539551 nova_compute[227360]: 2025-11-29 08:07:49.824 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:49Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:a9:24 10.100.0.11
Nov 29 03:07:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:49Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:a9:24 10.100.0.11
Nov 29 03:07:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 03:07:50 np0005539551 nova_compute[227360]: 2025-11-29 08:07:50.574 227364 DEBUG nova.storage.rbd_utils [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] cloning vms/9cebdce6-6212-492b-8105-301f19edb8b3_disk@9c5b136032df4957bb144322d364f5e6 to images/aa6ac671-fc8d-4598-a390-3f4d0b610d84 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:07:50 np0005539551 nova_compute[227360]: 2025-11-29 08:07:50.729 227364 DEBUG nova.storage.rbd_utils [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] flattening images/aa6ac671-fc8d-4598-a390-3f4d0b610d84 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:07:51 np0005539551 nova_compute[227360]: 2025-11-29 08:07:51.278 227364 DEBUG nova.storage.rbd_utils [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] removing snapshot(9c5b136032df4957bb144322d364f5e6) on rbd image(9cebdce6-6212-492b-8105-301f19edb8b3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:07:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 03:07:51 np0005539551 nova_compute[227360]: 2025-11-29 08:07:51.565 227364 DEBUG nova.storage.rbd_utils [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] creating snapshot(snap) on rbd image(aa6ac671-fc8d-4598-a390-3f4d0b610d84) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:07:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:51.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 03:07:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:53.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:54 np0005539551 nova_compute[227360]: 2025-11-29 08:07:54.268 227364 INFO nova.virt.libvirt.driver [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Snapshot image upload complete
Nov 29 03:07:54 np0005539551 nova_compute[227360]: 2025-11-29 08:07:54.270 227364 INFO nova.compute.manager [None req-839f3375-d15b-4955-8bfc-04eab736c593 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 5.55 seconds to snapshot the instance on the hypervisor.
Nov 29 03:07:54 np0005539551 nova_compute[227360]: 2025-11-29 08:07:54.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:54 np0005539551 nova_compute[227360]: 2025-11-29 08:07:54.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:55.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 03:07:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 03:07:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:57.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.575 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.575 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.576 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.576 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.576 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.578 227364 INFO nova.compute.manager [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Terminating instance
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.580 227364 DEBUG nova.compute.manager [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:07:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:57 np0005539551 kernel: tap8f05eb7d-5d (unregistering): left promiscuous mode
Nov 29 03:07:57 np0005539551 NetworkManager[48922]: <info>  [1764403677.6534] device (tap8f05eb7d-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.660 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:57Z|00246|binding|INFO|Releasing lport 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 from this chassis (sb_readonly=0)
Nov 29 03:07:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:57Z|00247|binding|INFO|Setting lport 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 down in Southbound
Nov 29 03:07:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:07:57Z|00248|binding|INFO|Removing iface tap8f05eb7d-5d ovn-installed in OVS
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.668 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a9:24 10.100.0.11'], port_security=['fa:16:3e:a6:a9:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9cebdce6-6212-492b-8105-301f19edb8b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02545fd3735d4977afe5612632ee4832', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2af8c5ee-3150-4114-a613-af89d1411c4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7c274-6c62-4970-a2aa-d8b6cb1c3d25, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=8f05eb7d-5dc1-48da-b1b2-dfee27931e85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.670 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 8f05eb7d-5dc1-48da-b1b2-dfee27931e85 in datapath e2cfff13-8384-4cb8-9851-9511ccb7ff82 unbound from our chassis
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.671 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2cfff13-8384-4cb8-9851-9511ccb7ff82, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.672 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4795fb5d-95a4-479f-abb6-12e61f26889f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.673 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82 namespace which is not needed anymore
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 29 03:07:57 np0005539551 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000045.scope: Consumed 14.154s CPU time.
Nov 29 03:07:57 np0005539551 systemd-machined[190756]: Machine qemu-32-instance-00000045 terminated.
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [NOTICE]   (253337) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:57 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [NOTICE]   (253337) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:57 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [WARNING]  (253337) : Exiting Master process...
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [ALERT]    (253337) : Current worker (253339) exited with code 143 (Terminated)
Nov 29 03:07:57 np0005539551 neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82[253333]: [WARNING]  (253337) : All workers exited. Exiting... (0)
Nov 29 03:07:57 np0005539551 systemd[1]: libpod-5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40.scope: Deactivated successfully.
Nov 29 03:07:57 np0005539551 podman[253654]: 2025-11-29 08:07:57.811921842 +0000 UTC m=+0.043774167 container died 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.811 227364 INFO nova.virt.libvirt.driver [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Instance destroyed successfully.
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.811 227364 DEBUG nova.objects.instance [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lazy-loading 'resources' on Instance uuid 9cebdce6-6212-492b-8105-301f19edb8b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.829 227364 DEBUG nova.virt.libvirt.vif [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1061952082',display_name='tempest-ImagesOneServerTestJSON-server-1061952082',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1061952082',id=69,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='02545fd3735d4977afe5612632ee4832',ramdisk_id='',reservation_id='r-456db5s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-941250990',owner_user_name='tempest-ImagesOneServerTestJSON-941250990-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:54Z,user_data=None,user_id='d0a358b2256f49b085b6e0d27911e743',uuid=9cebdce6-6212-492b-8105-301f19edb8b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.829 227364 DEBUG nova.network.os_vif_util [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converting VIF {"id": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "address": "fa:16:3e:a6:a9:24", "network": {"id": "e2cfff13-8384-4cb8-9851-9511ccb7ff82", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-596742618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02545fd3735d4977afe5612632ee4832", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f05eb7d-5d", "ovs_interfaceid": "8f05eb7d-5dc1-48da-b1b2-dfee27931e85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.830 227364 DEBUG nova.network.os_vif_util [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.831 227364 DEBUG os_vif [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.832 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.833 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f05eb7d-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.865 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.867 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.870 227364 INFO os_vif [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:24,bridge_name='br-int',has_traffic_filtering=True,id=8f05eb7d-5dc1-48da-b1b2-dfee27931e85,network=Network(e2cfff13-8384-4cb8-9851-9511ccb7ff82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f05eb7d-5d')
Nov 29 03:07:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay-5ed6bc60de2e4b0403bcd2a4c936e2fdd93ecd290fd56e876566c60c337f4fdb-merged.mount: Deactivated successfully.
Nov 29 03:07:57 np0005539551 podman[253654]: 2025-11-29 08:07:57.88621466 +0000 UTC m=+0.118066985 container cleanup 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:57 np0005539551 systemd[1]: libpod-conmon-5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40.scope: Deactivated successfully.
Nov 29 03:07:57 np0005539551 podman[253710]: 2025-11-29 08:07:57.947841941 +0000 UTC m=+0.040319914 container remove 5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.953 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab9591f-40b3-477a-9677-bc29819476a2]: (4, ('Sat Nov 29 08:07:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82 (5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40)\n5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40\nSat Nov 29 08:07:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82 (5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40)\n5998b83b45fc4bd6500ce716ed45d544afcaa407d117cfee02c8e36de9dc0a40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.954 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[97694aed-e548-4c91-90aa-7150497d936e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.955 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2cfff13-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 kernel: tape2cfff13-80: left promiscuous mode
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.961 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[57ee0bcd-d137-4370-b8a7-90dbea2a4527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 nova_compute[227360]: 2025-11-29 08:07:57.972 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.974 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8d040a6e-df1a-4ad8-9139-c4e3f9d7e056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.975 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[92d15a97-490b-4eb2-8109-a06554c38a96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.993 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab05abff-3bbf-42ef-be41-de771ced3b37]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670275, 'reachable_time': 33591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253727, 'error': None, 'target': 'ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.995 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e2cfff13-8384-4cb8-9851-9511ccb7ff82 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:07:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:07:57.995 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd9b63c-631a-43da-b7eb-c12d36a6df9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539551 systemd[1]: run-netns-ovnmeta\x2de2cfff13\x2d8384\x2d4cb8\x2d9851\x2d9511ccb7ff82.mount: Deactivated successfully.
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.058 227364 DEBUG nova.compute.manager [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-unplugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.059 227364 DEBUG oslo_concurrency.lockutils [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.059 227364 DEBUG oslo_concurrency.lockutils [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.060 227364 DEBUG oslo_concurrency.lockutils [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.060 227364 DEBUG nova.compute.manager [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] No waiting events found dispatching network-vif-unplugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.061 227364 DEBUG nova.compute.manager [req-31a26681-d4e6-4bc0-9eab-507f50c3aa39 req-15f43563-ca28-4d3d-9bb0-4bed6b4d688c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-unplugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.333 227364 INFO nova.virt.libvirt.driver [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Deleting instance files /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3_del
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.334 227364 INFO nova.virt.libvirt.driver [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Deletion of /var/lib/nova/instances/9cebdce6-6212-492b-8105-301f19edb8b3_del complete
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.388 227364 INFO nova.compute.manager [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.389 227364 DEBUG oslo.service.loopingcall [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.389 227364 DEBUG nova.compute.manager [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:07:58 np0005539551 nova_compute[227360]: 2025-11-29 08:07:58.389 227364 DEBUG nova.network.neutron [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:07:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:59.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:07:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539551 nova_compute[227360]: 2025-11-29 08:07:59.829 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.153 227364 DEBUG nova.compute.manager [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.153 227364 DEBUG oslo_concurrency.lockutils [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.154 227364 DEBUG oslo_concurrency.lockutils [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.154 227364 DEBUG oslo_concurrency.lockutils [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.155 227364 DEBUG nova.compute.manager [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] No waiting events found dispatching network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.155 227364 WARNING nova.compute.manager [req-ce957f5f-485f-409f-a89a-1a646508d2ed req-5a94bc64-1f2c-46c1-87c8-2a8bea8d7bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received unexpected event network-vif-plugged-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:08:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.654 227364 DEBUG nova.network.neutron [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.669 227364 INFO nova.compute.manager [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Took 2.28 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.740 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.741 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:00 np0005539551 nova_compute[227360]: 2025-11-29 08:08:00.792 227364 DEBUG oslo_concurrency.processutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4048652526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.269 227364 DEBUG oslo_concurrency.processutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.279 227364 DEBUG nova.compute.provider_tree [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.379 227364 DEBUG nova.scheduler.client.report [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.430 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.456 227364 INFO nova.scheduler.client.report [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Deleted allocations for instance 9cebdce6-6212-492b-8105-301f19edb8b3#033[00m
Nov 29 03:08:01 np0005539551 nova_compute[227360]: 2025-11-29 08:08:01.531 227364 DEBUG oslo_concurrency.lockutils [None req-5bf9e623-dc2d-4daa-bfcb-24644fc3a9f1 d0a358b2256f49b085b6e0d27911e743 02545fd3735d4977afe5612632ee4832 - - default default] Lock "9cebdce6-6212-492b-8105-301f19edb8b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:01.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.855 227364 DEBUG nova.compute.manager [req-ac7fc154-31d1-467f-aead-4045eeb7b838 req-22a6bced-a748-45ca-9276-55c9d593a42c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Received event network-vif-deleted-8f05eb7d-5dc1-48da-b1b2-dfee27931e85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.868 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.868 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.869 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.884 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.945 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.945 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.950 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:08:02 np0005539551 nova_compute[227360]: 2025-11-29 08:08:02.950 227364 INFO nova.compute.claims [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.083 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2536643597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.517 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.527 227364 DEBUG nova.compute.provider_tree [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:03.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.548 227364 DEBUG nova.scheduler.client.report [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.574 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.575 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:08:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.634 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.634 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.655 227364 INFO nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.674 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.774 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.775 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.775 227364 INFO nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Creating image(s)#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.806 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.844 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.881 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.885 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.948 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.949 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.950 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.950 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.976 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:03 np0005539551 nova_compute[227360]: 2025-11-29 08:08:03.981 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.011 227364 DEBUG nova.policy [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a010a95085342c5ae9a02f15b334fad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c57433fd3834430904b1908f24f3f2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:08:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.294 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.363 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] resizing rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.474 227364 DEBUG nova.objects.instance [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'migration_context' on Instance uuid 78549bb3-c5a0-4092-b64e-b4608e724d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.495 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.495 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Ensure instance console log exists: /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.495 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.496 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.496 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.780 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Successfully created port: 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:08:04 np0005539551 nova_compute[227360]: 2025-11-29 08:08:04.865 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539551 podman[253964]: 2025-11-29 08:08:05.243983176 +0000 UTC m=+0.073627663 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:08:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:05 np0005539551 podman[253963]: 2025-11-29 08:08:05.270586034 +0000 UTC m=+0.097071816 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:05 np0005539551 podman[253965]: 2025-11-29 08:08:05.276996234 +0000 UTC m=+0.106867617 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 03:08:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:05.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:05.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:05 np0005539551 nova_compute[227360]: 2025-11-29 08:08:05.614 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Successfully updated port: 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:08:05 np0005539551 nova_compute[227360]: 2025-11-29 08:08:05.630 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:05 np0005539551 nova_compute[227360]: 2025-11-29 08:08:05.630 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquired lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:05 np0005539551 nova_compute[227360]: 2025-11-29 08:08:05.630 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:05 np0005539551 nova_compute[227360]: 2025-11-29 08:08:05.782 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:08:06 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.003 227364 DEBUG nova.compute.manager [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-changed-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.003 227364 DEBUG nova.compute.manager [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Refreshing instance network info cache due to event network-changed-14fa53cc-7cdd-4dbb-b53c-f70de255efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.003 227364 DEBUG oslo_concurrency.lockutils [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.434 227364 DEBUG nova.network.neutron [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updating instance_info_cache with network_info: [{"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:08:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:07.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.839 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Releasing lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.840 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Instance network_info: |[{"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.840 227364 DEBUG oslo_concurrency.lockutils [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.841 227364 DEBUG nova.network.neutron [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Refreshing network info cache for port 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.846 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Start _get_guest_xml network_info=[{"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.853 227364 WARNING nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.859 227364 DEBUG nova.virt.libvirt.host [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.860 227364 DEBUG nova.virt.libvirt.host [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.866 227364 DEBUG nova.virt.libvirt.host [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.867 227364 DEBUG nova.virt.libvirt.host [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.869 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.869 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.870 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.871 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.871 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.872 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.872 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.873 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.873 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.874 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.874 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.875 227364 DEBUG nova.virt.hardware [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.881 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.926 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:07 np0005539551 nova_compute[227360]: 2025-11-29 08:08:07.979 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3559627634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.382 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.428 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.434 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3628110660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.916 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.918 227364 DEBUG nova.virt.libvirt.vif [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-744832929',display_name='tempest-ImagesOneServerNegativeTestJSON-server-744832929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-744832929',id=71,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-l9bl16un',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:03Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=78549bb3-c5a0-4092-b64e-b4608e724d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.918 227364 DEBUG nova.network.os_vif_util [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.919 227364 DEBUG nova.network.os_vif_util [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.920 227364 DEBUG nova.objects.instance [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 78549bb3-c5a0-4092-b64e-b4608e724d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.956 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <uuid>78549bb3-c5a0-4092-b64e-b4608e724d9c</uuid>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <name>instance-00000047</name>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-744832929</nova:name>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:08:07</nova:creationTime>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:user uuid="1a010a95085342c5ae9a02f15b334fad">tempest-ImagesOneServerNegativeTestJSON-167104479-project-member</nova:user>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:project uuid="5c57433fd3834430904b1908f24f3f2f">tempest-ImagesOneServerNegativeTestJSON-167104479</nova:project>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <nova:port uuid="14fa53cc-7cdd-4dbb-b53c-f70de255efa7">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="serial">78549bb3-c5a0-4092-b64e-b4608e724d9c</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="uuid">78549bb3-c5a0-4092-b64e-b4608e724d9c</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/78549bb3-c5a0-4092-b64e-b4608e724d9c_disk">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:90:21:03"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <target dev="tap14fa53cc-7c"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/console.log" append="off"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:08:08 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:08:08 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:08:08 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:08:08 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.958 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Preparing to wait for external event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.958 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.958 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.958 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.959 227364 DEBUG nova.virt.libvirt.vif [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-744832929',display_name='tempest-ImagesOneServerNegativeTestJSON-server-744832929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-744832929',id=71,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-l9bl16un',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:03Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=78549bb3-c5a0-4092-b64e-b4608e724d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.959 227364 DEBUG nova.network.os_vif_util [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.960 227364 DEBUG nova.network.os_vif_util [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.960 227364 DEBUG os_vif [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.961 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.962 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.964 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.964 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14fa53cc-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.965 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14fa53cc-7c, col_values=(('external_ids', {'iface-id': '14fa53cc-7cdd-4dbb-b53c-f70de255efa7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:21:03', 'vm-uuid': '78549bb3-c5a0-4092-b64e-b4608e724d9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.966 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539551 NetworkManager[48922]: <info>  [1764403688.9682] manager: (tap14fa53cc-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.969 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.977 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539551 nova_compute[227360]: 2025-11-29 08:08:08.979 227364 INFO os_vif [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c')#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.109 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.111 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.112 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No VIF found with MAC fa:16:3e:90:21:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.113 227364 INFO nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Using config drive#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.159 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:09.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.605 227364 INFO nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Creating config drive at /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config#033[00m
Nov 29 03:08:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.615 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvcnzdof execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.762 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvcnzdof" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.812 227364 DEBUG nova.storage.rbd_utils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.818 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:09 np0005539551 nova_compute[227360]: 2025-11-29 08:08:09.867 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.017 227364 DEBUG nova.network.neutron [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updated VIF entry in instance network info cache for port 14fa53cc-7cdd-4dbb-b53c-f70de255efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.018 227364 DEBUG nova.network.neutron [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updating instance_info_cache with network_info: [{"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.187 227364 DEBUG oslo_concurrency.lockutils [req-db926dfa-b553-4281-8753-2cfb5cb7a06e req-705bc860-375f-433c-ba9d-d44213a95edf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:08:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.269 227364 DEBUG oslo_concurrency.processutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config 78549bb3-c5a0-4092-b64e-b4608e724d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.271 227364 INFO nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Deleting local config drive /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c/disk.config because it was imported into RBD.
Nov 29 03:08:10 np0005539551 kernel: tap14fa53cc-7c: entered promiscuous mode
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.3565] manager: (tap14fa53cc-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 03:08:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:10Z|00249|binding|INFO|Claiming lport 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 for this chassis.
Nov 29 03:08:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:10Z|00250|binding|INFO|14fa53cc-7cdd-4dbb-b53c-f70de255efa7: Claiming fa:16:3e:90:21:03 10.100.0.9
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.355 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 systemd-udevd[254269]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.441 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:21:03 10.100.0.9'], port_security=['fa:16:3e:90:21:03 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '78549bb3-c5a0-4092-b64e-b4608e724d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=14fa53cc-7cdd-4dbb-b53c-f70de255efa7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.4463] device (tap14fa53cc-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.443 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d bound to our chassis
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.445 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.4484] device (tap14fa53cc-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.465 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.464 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7675cb10-7c05-4458-82c1-15dda0b37052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.464 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c6a-81 in ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:08:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:10Z|00251|binding|INFO|Setting lport 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 ovn-installed in OVS
Nov 29 03:08:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:10Z|00252|binding|INFO|Setting lport 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 up in Southbound
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.469 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c6a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.469 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04fd0d7b-a288-4ad2-8257-808f984ce011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 systemd-machined[190756]: New machine qemu-33-instance-00000047.
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.471 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca622de-f1da-48d9-83e4-bc11d27a9f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 systemd[1]: Started Virtual Machine qemu-33-instance-00000047.
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.486 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[0c50ccfe-c781-43e5-9621-6a5357e55523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.515 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[291fad1e-9938-417d-9a03-298d27a791d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.547 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[830fa35a-9045-4bae-b3a9-8ee6a99f6699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.556 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5e1b38-1079-482e-a593-2211e14be1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.5577] manager: (tap0de30c6a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.594 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9d83796b-89ef-4aba-bdf5-bd6aefd2b64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.599 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9bca293d-4aae-40d9-9896-492f3e88f509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.6258] device (tap0de30c6a-80): carrier: link connected
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.632 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b84ab7bb-3016-479c-abc3-4bb0194272ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.652 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0cae62-9b7e-468a-bb64-4c77335cbc11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673750, 'reachable_time': 23195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254304, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.668 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e17efc88-391a-4cf3-a398-04ad8e3d3929]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:215a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673750, 'tstamp': 673750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254305, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.682 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74392f88-17cb-4f7d-ace6-7acbbfec856a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673750, 'reachable_time': 23195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254306, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.717 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1799608c-d433-4af2-b70d-f7c4c6209107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.780 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a50143a-f94f-4b8f-912c-070c6c1fb987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.782 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.782 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.783 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c6a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:08:10 np0005539551 kernel: tap0de30c6a-80: entered promiscuous mode
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.786 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 NetworkManager[48922]: <info>  [1764403690.7885] manager: (tap0de30c6a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.789 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.792 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c6a-80, col_values=(('external_ids', {'iface-id': 'db5f456f-a9cd-44e0-9bf4-deda3979e911'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:08:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:10Z|00253|binding|INFO|Releasing lport db5f456f-a9cd-44e0-9bf4-deda3979e911 from this chassis (sb_readonly=0)
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.796 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.797 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a78e682d-7007-4fee-96b6-3ac463510b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.799 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:08:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:10.801 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'env', 'PROCESS_TAG=haproxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c6a-82ca-4f9f-a37d-5949a70a385d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:08:10 np0005539551 nova_compute[227360]: 2025-11-29 08:08:10.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:11 np0005539551 podman[254338]: 2025-11-29 08:08:11.258537909 +0000 UTC m=+0.071955257 container create ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:08:11 np0005539551 systemd[1]: Started libpod-conmon-ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e.scope.
Nov 29 03:08:11 np0005539551 podman[254338]: 2025-11-29 08:08:11.213196951 +0000 UTC m=+0.026614299 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:11 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:08:11 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143abdccf4219f4d3be5bb0a61630a04a6c78df554d0b074e065c0a6cc02f7be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:11 np0005539551 podman[254338]: 2025-11-29 08:08:11.35732909 +0000 UTC m=+0.170746448 container init ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:08:11 np0005539551 podman[254338]: 2025-11-29 08:08:11.367946163 +0000 UTC m=+0.181363491 container start ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:08:11 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [NOTICE]   (254398) : New worker (254401) forked
Nov 29 03:08:11 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [NOTICE]   (254398) : Loading success.
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.423 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403691.4226632, 78549bb3-c5a0-4092-b64e-b4608e724d9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.423 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] VM Started (Lifecycle Event)
Nov 29 03:08:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:11.426 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:11.449 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.518 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.525 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403691.4229012, 78549bb3-c5a0-4092-b64e-b4608e724d9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.526 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] VM Paused (Lifecycle Event)
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.546 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.551 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:08:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:11.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.572 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:11.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.907 227364 DEBUG nova.compute.manager [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.908 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.910 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.910 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.911 227364 DEBUG nova.compute.manager [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Processing event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.911 227364 DEBUG nova.compute.manager [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.912 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.912 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.913 227364 DEBUG oslo_concurrency.lockutils [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.913 227364 DEBUG nova.compute.manager [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] No waiting events found dispatching network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.914 227364 WARNING nova.compute.manager [req-6d40aee8-5b5a-4757-b432-5da8ea92b9c5 req-608a61ce-4e1a-4c03-be57-cb58a6a13f63 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received unexpected event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.916 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.920 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403691.9202545, 78549bb3-c5a0-4092-b64e-b4608e724d9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.921 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.925 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.929 227364 INFO nova.virt.libvirt.driver [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Instance spawned successfully.#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.930 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.951 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.959 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.966 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.967 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.968 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.969 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.970 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.971 227364 DEBUG nova.virt.libvirt.driver [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:11 np0005539551 nova_compute[227360]: 2025-11-29 08:08:11.980 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.027 227364 INFO nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Took 8.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.028 227364 DEBUG nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.090 227364 INFO nova.compute.manager [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Took 9.17 seconds to build instance.#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.109 227364 DEBUG oslo_concurrency.lockutils [None req-1788c19d-4a01-4376-b05b-87d9f5ed7acd 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.809 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403677.808616, 9cebdce6-6212-492b-8105-301f19edb8b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.810 227364 INFO nova.compute.manager [-] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:12 np0005539551 nova_compute[227360]: 2025-11-29 08:08:12.835 227364 DEBUG nova.compute.manager [None req-87133ee5-b604-4e67-818b-d18654316956 - - - - - -] [instance: 9cebdce6-6212-492b-8105-301f19edb8b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:13.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:13 np0005539551 nova_compute[227360]: 2025-11-29 08:08:13.968 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:14 np0005539551 nova_compute[227360]: 2025-11-29 08:08:14.908 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:14 np0005539551 nova_compute[227360]: 2025-11-29 08:08:14.923 227364 DEBUG nova.compute.manager [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:14 np0005539551 nova_compute[227360]: 2025-11-29 08:08:14.995 227364 INFO nova.compute.manager [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] instance snapshotting#033[00m
Nov 29 03:08:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:15 np0005539551 nova_compute[227360]: 2025-11-29 08:08:15.451 227364 INFO nova.virt.libvirt.driver [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Beginning live snapshot process#033[00m
Nov 29 03:08:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:15.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:15 np0005539551 nova_compute[227360]: 2025-11-29 08:08:15.679 227364 DEBUG nova.virt.libvirt.imagebackend [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:08:15 np0005539551 nova_compute[227360]: 2025-11-29 08:08:15.911 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] creating snapshot(f6765654953443fe8c6b8228b5851636) on rbd image(78549bb3-c5a0-4092-b64e-b4608e724d9c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:08:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 03:08:16 np0005539551 nova_compute[227360]: 2025-11-29 08:08:16.917 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] cloning vms/78549bb3-c5a0-4092-b64e-b4608e724d9c_disk@f6765654953443fe8c6b8228b5851636 to images/68cf293e-7e4e-4b94-aea6-5b4d60641413 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.056 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] flattening images/68cf293e-7e4e-4b94-aea6-5b4d60641413 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.401 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] removing snapshot(f6765654953443fe8c6b8228b5851636) on rbd image(78549bb3-c5a0-4092-b64e-b4608e724d9c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.432 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.432 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78549bb3-c5a0-4092-b64e-b4608e724d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:17.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:17.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 03:08:17 np0005539551 nova_compute[227360]: 2025-11-29 08:08:17.901 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] creating snapshot(snap) on rbd image(68cf293e-7e4e-4b94-aea6-5b4d60641413) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:08:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539551 nova_compute[227360]: 2025-11-29 08:08:18.970 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 68cf293e-7e4e-4b94-aea6-5b4d60641413 could not be found.
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 68cf293e-7e4e-4b94-aea6-5b4d60641413
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver 
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver 
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 68cf293e-7e4e-4b94-aea6-5b4d60641413 could not be found.
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.108 227364 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.180 227364 DEBUG nova.storage.rbd_utils [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] removing snapshot(snap) on rbd image(68cf293e-7e4e-4b94-aea6-5b4d60641413) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:08:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:19.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.645 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updating instance_info_cache with network_info: [{"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.662 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-78549bb3-c5a0-4092-b64e-b4608e724d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.662 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.662 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:19.859 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:19.860 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:19.861 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:19 np0005539551 nova_compute[227360]: 2025-11-29 08:08:19.909 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 03:08:20 np0005539551 nova_compute[227360]: 2025-11-29 08:08:20.246 227364 WARNING nova.compute.manager [None req-2d2b9ea9-2680-4b97-8085-ecd38235b36d 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Image not found during snapshot: nova.exception.ImageNotFound: Image 68cf293e-7e4e-4b94-aea6-5b4d60641413 could not be found.#033[00m
Nov 29 03:08:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:20 np0005539551 nova_compute[227360]: 2025-11-29 08:08:20.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:20.451 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:21 np0005539551 nova_compute[227360]: 2025-11-29 08:08:21.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:21.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:21.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2628370998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:22 np0005539551 nova_compute[227360]: 2025-11-29 08:08:22.894 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.094 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.095 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.187 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.188 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.188 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.188 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.189 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.190 227364 INFO nova.compute.manager [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Terminating instance#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.192 227364 DEBUG nova.compute.manager [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:23 np0005539551 kernel: tap14fa53cc-7c (unregistering): left promiscuous mode
Nov 29 03:08:23 np0005539551 NetworkManager[48922]: <info>  [1764403703.2260] device (tap14fa53cc-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:23Z|00254|binding|INFO|Releasing lport 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 from this chassis (sb_readonly=0)
Nov 29 03:08:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:23Z|00255|binding|INFO|Setting lport 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 down in Southbound
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.236 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:08:23Z|00256|binding|INFO|Removing iface tap14fa53cc-7c ovn-installed in OVS
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.242 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:21:03 10.100.0.9'], port_security=['fa:16:3e:90:21:03 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '78549bb3-c5a0-4092-b64e-b4608e724d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=14fa53cc-7cdd-4dbb-b53c-f70de255efa7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.244 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 14fa53cc-7cdd-4dbb-b53c-f70de255efa7 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d unbound from our chassis#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.246 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.247 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b41bdab3-2606-41b4-bd4c-a086f9d36e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.248 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace which is not needed anymore#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.260 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 29 03:08:23 np0005539551 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Consumed 12.485s CPU time.
Nov 29 03:08:23 np0005539551 systemd-machined[190756]: Machine qemu-33-instance-00000047 terminated.
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.325 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.326 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4469MB free_disk=20.92196273803711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.327 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.327 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [NOTICE]   (254398) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:23 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [NOTICE]   (254398) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:23 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [WARNING]  (254398) : Exiting Master process...
Nov 29 03:08:23 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [ALERT]    (254398) : Current worker (254401) exited with code 143 (Terminated)
Nov 29 03:08:23 np0005539551 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[254391]: [WARNING]  (254398) : All workers exited. Exiting... (0)
Nov 29 03:08:23 np0005539551 systemd[1]: libpod-ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e.scope: Deactivated successfully.
Nov 29 03:08:23 np0005539551 podman[254686]: 2025-11-29 08:08:23.399960785 +0000 UTC m=+0.044920277 container died ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.418 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 78549bb3-c5a0-4092-b64e-b4608e724d9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.419 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.419 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.426 227364 INFO nova.virt.libvirt.driver [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Instance destroyed successfully.#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.427 227364 DEBUG nova.objects.instance [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'resources' on Instance uuid 78549bb3-c5a0-4092-b64e-b4608e724d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:23 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:23 np0005539551 systemd[1]: var-lib-containers-storage-overlay-143abdccf4219f4d3be5bb0a61630a04a6c78df554d0b074e065c0a6cc02f7be-merged.mount: Deactivated successfully.
Nov 29 03:08:23 np0005539551 podman[254686]: 2025-11-29 08:08:23.443853934 +0000 UTC m=+0.088813416 container cleanup ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:08:23 np0005539551 systemd[1]: libpod-conmon-ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e.scope: Deactivated successfully.
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.455 227364 DEBUG nova.virt.libvirt.vif [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-744832929',display_name='tempest-ImagesOneServerNegativeTestJSON-server-744832929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-744832929',id=71,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-l9bl16un',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:20Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=78549bb3-c5a0-4092-b64e-b4608e724d9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.456 227364 DEBUG nova.network.os_vif_util [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "address": "fa:16:3e:90:21:03", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14fa53cc-7c", "ovs_interfaceid": "14fa53cc-7cdd-4dbb-b53c-f70de255efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.457 227364 DEBUG nova.network.os_vif_util [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.457 227364 DEBUG os_vif [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.459 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14fa53cc-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.465 227364 INFO os_vif [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:21:03,bridge_name='br-int',has_traffic_filtering=True,id=14fa53cc-7cdd-4dbb-b53c-f70de255efa7,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14fa53cc-7c')#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.485 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:23 np0005539551 podman[254725]: 2025-11-29 08:08:23.521223644 +0000 UTC m=+0.051738578 container remove ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.527 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[781ecc63-42d3-41d5-a803-f7374b88c3cd]: (4, ('Sat Nov 29 08:08:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e)\ned01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e\nSat Nov 29 08:08:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (ed01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e)\ned01bf1c7609d869a86165110ae5217efbd7be75931d3021a0cfbb8d10f90c4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.529 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0405d4e-d322-49f9-aef6-ac2ca0b93d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.530 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:23 np0005539551 kernel: tap0de30c6a-80: left promiscuous mode
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.540 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.549 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[397bd2f1-72a0-494b-ae25-df21863c040d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.565 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d488ee60-a63c-4961-8da1-1ef3b1f041ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.566 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[25438f6c-2d3d-41dc-96f0-bbdf1c96563a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:23.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.587 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d956b1e7-0682-4b40-a2c6-e2db1ad67c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673742, 'reachable_time': 35617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254761, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 systemd[1]: run-netns-ovnmeta\x2d0de30c6a\x2d82ca\x2d4f9f\x2da37d\x2d5949a70a385d.mount: Deactivated successfully.
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.590 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:08:23.590 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1c5bd7-831e-44ea-8915-28227fe48ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:23.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.667 227364 DEBUG nova.compute.manager [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-unplugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.667 227364 DEBUG oslo_concurrency.lockutils [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.668 227364 DEBUG oslo_concurrency.lockutils [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.669 227364 DEBUG oslo_concurrency.lockutils [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.669 227364 DEBUG nova.compute.manager [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] No waiting events found dispatching network-vif-unplugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.669 227364 DEBUG nova.compute.manager [req-5059099a-3468-4af6-a0f1-6820649a397f req-2d13142c-268a-4ec5-b2e3-7d8fb5c71a15 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-unplugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.897 227364 INFO nova.virt.libvirt.driver [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Deleting instance files /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c_del#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.897 227364 INFO nova.virt.libvirt.driver [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Deletion of /var/lib/nova/instances/78549bb3-c5a0-4092-b64e-b4608e724d9c_del complete#033[00m
Nov 29 03:08:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/969821837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.930 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.940 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.949 227364 INFO nova.compute.manager [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.949 227364 DEBUG oslo.service.loopingcall [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.950 227364 DEBUG nova.compute.manager [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.950 227364 DEBUG nova.network.neutron [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.963 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.988 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:08:23 np0005539551 nova_compute[227360]: 2025-11-29 08:08:23.989 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.719 227364 DEBUG nova.network.neutron [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.750 227364 INFO nova.compute.manager [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Took 0.80 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.804 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.804 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.808 227364 DEBUG nova.compute.manager [req-253e405e-9db1-4c79-a6ca-cbf7102cef8d req-5cfe4194-1c3c-4e99-b4f7-7b8b3c2598d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-deleted-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.885 227364 DEBUG oslo_concurrency.processutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:24 np0005539551 nova_compute[227360]: 2025-11-29 08:08:24.923 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132201846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.441 227364 DEBUG oslo_concurrency.processutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.447 227364 DEBUG nova.compute.provider_tree [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.463 227364 DEBUG nova.scheduler.client.report [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.485 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.509 227364 INFO nova.scheduler.client.report [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Deleted allocations for instance 78549bb3-c5a0-4092-b64e-b4608e724d9c#033[00m
Nov 29 03:08:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.586 227364 DEBUG oslo_concurrency.lockutils [None req-90156059-79b6-4808-931e-50b196c1d982 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.775 227364 DEBUG nova.compute.manager [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.775 227364 DEBUG oslo_concurrency.lockutils [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.776 227364 DEBUG oslo_concurrency.lockutils [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.776 227364 DEBUG oslo_concurrency.lockutils [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78549bb3-c5a0-4092-b64e-b4608e724d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.777 227364 DEBUG nova.compute.manager [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] No waiting events found dispatching network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.777 227364 WARNING nova.compute.manager [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Received unexpected event network-vif-plugged-14fa53cc-7cdd-4dbb-b53c-f70de255efa7 for instance with vm_state deleted and task_state None.
Nov 29 03:08:25 np0005539551 nova_compute[227360]: 2025-11-29 08:08:25.990 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 03:08:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:27.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:28 np0005539551 nova_compute[227360]: 2025-11-29 08:08:28.414 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:28 np0005539551 nova_compute[227360]: 2025-11-29 08:08:28.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:29 np0005539551 nova_compute[227360]: 2025-11-29 08:08:29.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.865682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710865768, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2631, "num_deletes": 270, "total_data_size": 5737673, "memory_usage": 5822608, "flush_reason": "Manual Compaction"}
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710898928, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3744075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35215, "largest_seqno": 37841, "table_properties": {"data_size": 3733166, "index_size": 7019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23617, "raw_average_key_size": 21, "raw_value_size": 3711127, "raw_average_value_size": 3370, "num_data_blocks": 300, "num_entries": 1101, "num_filter_entries": 1101, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403542, "oldest_key_time": 1764403542, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 33304 microseconds, and 14475 cpu microseconds.
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.898983) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3744075 bytes OK
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.899016) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.901398) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.901421) EVENT_LOG_v1 {"time_micros": 1764403710901414, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.901443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5725826, prev total WAL file size 5725826, number of live WAL files 2.
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.903754) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3656KB)], [69(8700KB)]
Nov 29 03:08:30 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710903833, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 12653321, "oldest_snapshot_seqno": -1}
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6920 keys, 10686769 bytes, temperature: kUnknown
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711014152, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10686769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10640030, "index_size": 28323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178771, "raw_average_key_size": 25, "raw_value_size": 10515243, "raw_average_value_size": 1519, "num_data_blocks": 1124, "num_entries": 6920, "num_filter_entries": 6920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.014574) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10686769 bytes
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.016378) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.6 rd, 96.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 7464, records dropped: 544 output_compression: NoCompression
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.016406) EVENT_LOG_v1 {"time_micros": 1764403711016393, "job": 42, "event": "compaction_finished", "compaction_time_micros": 110420, "compaction_time_cpu_micros": 45735, "output_level": 6, "num_output_files": 1, "total_output_size": 10686769, "num_input_records": 7464, "num_output_records": 6920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711017837, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711020819, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:30.903624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.020863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.020868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.020871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.020873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:08:31.020876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:31.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:31.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:33 np0005539551 nova_compute[227360]: 2025-11-29 08:08:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:33 np0005539551 nova_compute[227360]: 2025-11-29 08:08:33.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:08:33 np0005539551 nova_compute[227360]: 2025-11-29 08:08:33.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539551 nova_compute[227360]: 2025-11-29 08:08:34.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:34 np0005539551 nova_compute[227360]: 2025-11-29 08:08:34.953 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:35 np0005539551 podman[254807]: 2025-11-29 08:08:35.619196493 +0000 UTC m=+0.070981832 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true)
Nov 29 03:08:35 np0005539551 podman[254808]: 2025-11-29 08:08:35.619233454 +0000 UTC m=+0.066188184 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:35 np0005539551 podman[254806]: 2025-11-29 08:08:35.642518004 +0000 UTC m=+0.094421856 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:08:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:35.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:37.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:38 np0005539551 nova_compute[227360]: 2025-11-29 08:08:38.425 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403703.4241035, 78549bb3-c5a0-4092-b64e-b4608e724d9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:08:38 np0005539551 nova_compute[227360]: 2025-11-29 08:08:38.426 227364 INFO nova.compute.manager [-] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] VM Stopped (Lifecycle Event)
Nov 29 03:08:38 np0005539551 nova_compute[227360]: 2025-11-29 08:08:38.465 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:38 np0005539551 nova_compute[227360]: 2025-11-29 08:08:38.478 227364 DEBUG nova.compute.manager [None req-335d89fa-390d-4e24-ac1a-5fada80da5a9 - - - - - -] [instance: 78549bb3-c5a0-4092-b64e-b4608e724d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:08:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:39.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:39 np0005539551 nova_compute[227360]: 2025-11-29 08:08:39.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:41.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539551 nova_compute[227360]: 2025-11-29 08:08:43.467 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539551 nova_compute[227360]: 2025-11-29 08:08:44.956 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:48 np0005539551 nova_compute[227360]: 2025-11-29 08:08:48.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:49.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539551 nova_compute[227360]: 2025-11-29 08:08:49.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:51.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:51.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:53 np0005539551 nova_compute[227360]: 2025-11-29 08:08:53.471 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:53.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:54 np0005539551 nova_compute[227360]: 2025-11-29 08:08:54.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:55.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:57 np0005539551 nova_compute[227360]: 2025-11-29 08:08:57.379 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:57.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:58 np0005539551 nova_compute[227360]: 2025-11-29 08:08:58.473 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:08:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:59 np0005539551 nova_compute[227360]: 2025-11-29 08:08:59.962 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:01.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:03 np0005539551 nova_compute[227360]: 2025-11-29 08:09:03.492 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:03.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:04 np0005539551 nova_compute[227360]: 2025-11-29 08:09:04.964 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:06 np0005539551 podman[254869]: 2025-11-29 08:09:06.618841076 +0000 UTC m=+0.071657399 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:09:06 np0005539551 podman[254870]: 2025-11-29 08:09:06.625205786 +0000 UTC m=+0.071526306 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:09:06 np0005539551 podman[254875]: 2025-11-29 08:09:06.635762717 +0000 UTC m=+0.067086177 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:09:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:08 np0005539551 nova_compute[227360]: 2025-11-29 08:09:08.494 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:09.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:09 np0005539551 nova_compute[227360]: 2025-11-29 08:09:09.966 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:11.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.048 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.049 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.116 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.211 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.211 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.218 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.218 227364 INFO nova.compute.claims [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.324 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.496 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3788334999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.776 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.783 227364 DEBUG nova.compute.provider_tree [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.807 227364 DEBUG nova.scheduler.client.report [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.831 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.831 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.883 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.883 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.908 227364 INFO nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:13 np0005539551 nova_compute[227360]: 2025-11-29 08:09:13.936 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.028 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.029 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.030 227364 INFO nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Creating image(s)#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.055 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.080 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.106 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.109 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.175 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.176 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.177 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.177 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.202 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.205 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.550 227364 DEBUG nova.policy [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.592 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:14 np0005539551 nova_compute[227360]: 2025-11-29 08:09:14.669 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] resizing rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.003 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.011 227364 DEBUG nova.objects.instance [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'migration_context' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.029 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.029 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Ensure instance console log exists: /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.030 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.031 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.031 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:15.335 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.336 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:15.337 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:15 np0005539551 nova_compute[227360]: 2025-11-29 08:09:15.448 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Successfully created port: da36c2e4-1083-48fb-ae61-47bca3a21912 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:15.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:09:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.355 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Successfully updated port: da36c2e4-1083-48fb-ae61-47bca3a21912 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.368 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.368 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.369 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.504 227364 DEBUG nova.compute.manager [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-changed-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.505 227364 DEBUG nova.compute.manager [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing instance network info cache due to event network-changed-da36c2e4-1083-48fb-ae61-47bca3a21912. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.505 227364 DEBUG oslo_concurrency.lockutils [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:16 np0005539551 nova_compute[227360]: 2025-11-29 08:09:16.580 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:17.339 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.036 227364 DEBUG nova.network.neutron [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.069 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.070 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Instance network_info: |[{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.070 227364 DEBUG oslo_concurrency.lockutils [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.071 227364 DEBUG nova.network.neutron [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing network info cache for port da36c2e4-1083-48fb-ae61-47bca3a21912 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.075 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Start _get_guest_xml network_info=[{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.081 227364 WARNING nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.088 227364 DEBUG nova.virt.libvirt.host [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.089 227364 DEBUG nova.virt.libvirt.host [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.092 227364 DEBUG nova.virt.libvirt.host [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.093 227364 DEBUG nova.virt.libvirt.host [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.094 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.095 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.095 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.095 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.096 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.096 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.096 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.097 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.097 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.097 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.098 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.098 227364 DEBUG nova.virt.hardware [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.101 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.511 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717126138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.543 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.572 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.576 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579794424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.992 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.994 227364 DEBUG nova.virt.libvirt.vif [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.995 227364 DEBUG nova.network.os_vif_util [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.996 227364 DEBUG nova.network.os_vif_util [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:09:18 np0005539551 nova_compute[227360]: 2025-11-29 08:09:18.997 227364 DEBUG nova.objects.instance [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_devices' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.021 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <uuid>00bfed58-0a25-44a8-aa0b-2330c27be8ee</uuid>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <name>instance-0000004a</name>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:09:18</nova:creationTime>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="serial">00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="uuid">00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:52:6c:f3"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <target dev="tapda36c2e4-10"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log" append="off"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:09:19 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:09:19 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:09:19 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:09:19 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.023 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Preparing to wait for external event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.023 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.024 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.024 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.025 227364 DEBUG nova.virt.libvirt.vif [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.026 227364 DEBUG nova.network.os_vif_util [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.026 227364 DEBUG nova.network.os_vif_util [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.026 227364 DEBUG os_vif [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.027 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.027 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.028 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.031 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda36c2e4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.032 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda36c2e4-10, col_values=(('external_ids', {'iface-id': 'da36c2e4-1083-48fb-ae61-47bca3a21912', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:6c:f3', 'vm-uuid': '00bfed58-0a25-44a8-aa0b-2330c27be8ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:19 np0005539551 NetworkManager[48922]: <info>  [1764403759.0342] manager: (tapda36c2e4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.036 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.040 227364 INFO os_vif [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10')
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.135 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.135 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.136 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:52:6c:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.136 227364 INFO nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Using config drive
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.161 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.422 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.422 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.423 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.447 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.447 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.515 227364 INFO nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Creating config drive at /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.523 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12cvf1gy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.657 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12cvf1gy" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.697 227364 DEBUG nova.storage.rbd_utils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.703 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.844 227364 DEBUG nova.network.neutron [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updated VIF entry in instance network info cache for port da36c2e4-1083-48fb-ae61-47bca3a21912. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.845 227364 DEBUG nova.network.neutron [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.859 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.859 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.860 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.872 227364 DEBUG oslo_concurrency.lockutils [req-4427c89f-5e11-4a46-abb4-5d9001ff53be req-128ff06b-2179-4047-8f0a-2302665dc46f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.891 227364 DEBUG oslo_concurrency.processutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config 00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.892 227364 INFO nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Deleting local config drive /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:19 np0005539551 kernel: tapda36c2e4-10: entered promiscuous mode
Nov 29 03:09:19 np0005539551 NetworkManager[48922]: <info>  [1764403759.9433] manager: (tapda36c2e4-10): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 03:09:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:19Z|00257|binding|INFO|Claiming lport da36c2e4-1083-48fb-ae61-47bca3a21912 for this chassis.
Nov 29 03:09:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:19Z|00258|binding|INFO|da36c2e4-1083-48fb-ae61-47bca3a21912: Claiming fa:16:3e:52:6c:f3 10.100.0.14
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.944 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.948 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539551 nova_compute[227360]: 2025-11-29 08:09:19.952 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.963 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:6c:f3 10.100.0.14'], port_security=['fa:16:3e:52:6c:f3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '00bfed58-0a25-44a8-aa0b-2330c27be8ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e205a7b-1ce6-4d19-af7d-9b03504545f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=da36c2e4-1083-48fb-ae61-47bca3a21912) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.964 139482 INFO neutron.agent.ovn.metadata.agent [-] Port da36c2e4-1083-48fb-ae61-47bca3a21912 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.965 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.976 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[de005259-295f-4bae-818a-af41c579fa44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.978 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddd8b166-71 in ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.979 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddd8b166-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:19 np0005539551 systemd-machined[190756]: New machine qemu-34-instance-0000004a.
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.979 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8bb7e3-4702-4818-9a00-7b3421f8a293]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:19.981 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[106f32cc-aefb-46fd-ac71-b35adb3b99c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.002 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7e32b137-1030-48c9-9371-2b26cb1757cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 systemd[1]: Started Virtual Machine qemu-34-instance-0000004a.
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.017 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.019 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8079c69c-097d-45a5-8999-24b116df1c6a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:20Z|00259|binding|INFO|Setting lport da36c2e4-1083-48fb-ae61-47bca3a21912 ovn-installed in OVS
Nov 29 03:09:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:20Z|00260|binding|INFO|Setting lport da36c2e4-1083-48fb-ae61-47bca3a21912 up in Southbound
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.022 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 systemd-udevd[255393]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.044 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[05204e8f-fb4b-485e-9fd7-e617a21f1892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 NetworkManager[48922]: <info>  [1764403760.0483] device (tapda36c2e4-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:20 np0005539551 NetworkManager[48922]: <info>  [1764403760.0490] device (tapda36c2e4-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:20 np0005539551 NetworkManager[48922]: <info>  [1764403760.0507] manager: (tapddd8b166-70): new Veth device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.050 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5099f2-4acc-47a5-9652-19f969419db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 systemd-udevd[255401]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.078 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a80dfaa1-6748-4065-97a3-490d4b8a33f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.082 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[be9ddd08-4a14-4ab8-9396-6710495ac59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 NetworkManager[48922]: <info>  [1764403760.1029] device (tapddd8b166-70): carrier: link connected
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.106 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea2e68-6290-4696-b2d7-1bac8b80b6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.123 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c918ac9c-dc8f-4dcb-9a99-5c5facbf70f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680698, 'reachable_time': 30284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255422, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.136 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d14aca2d-810b-454a-9b8e-b86f1693c49c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:3576'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680698, 'tstamp': 680698}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255423, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.155 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa9b28d-d617-48d6-a440-7c0f1157ef63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680698, 'reachable_time': 30284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255424, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.177 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[96fe07e3-2abe-4ba1-a87a-680303ab28d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.242 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae86b68-4888-4054-9159-650750d683b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.244 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.244 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.244 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.246 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 NetworkManager[48922]: <info>  [1764403760.2472] manager: (tapddd8b166-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 03:09:20 np0005539551 kernel: tapddd8b166-70: entered promiscuous mode
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.250 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.251 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:20Z|00261|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.267 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.268 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.269 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0e239d-5447-4f98-a869-2160bb1a5eac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.270 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:09:20.270 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'env', 'PROCESS_TAG=haproxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.429 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403760.428808, 00bfed58-0a25-44a8-aa0b-2330c27be8ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.430 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.477 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.482 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403760.4291806, 00bfed58-0a25-44a8-aa0b-2330c27be8ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.483 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.502 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.507 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:20 np0005539551 nova_compute[227360]: 2025-11-29 08:09:20.552 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:20 np0005539551 podman[255498]: 2025-11-29 08:09:20.662323297 +0000 UTC m=+0.050968328 container create 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:20 np0005539551 systemd[1]: Started libpod-conmon-588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2.scope.
Nov 29 03:09:20 np0005539551 podman[255498]: 2025-11-29 08:09:20.63428308 +0000 UTC m=+0.022928131 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:20 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:09:20 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2451a4160f4328be532d5e867e05b044b8d70b9c50c8012e039dcb5bce0cf16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:20 np0005539551 podman[255498]: 2025-11-29 08:09:20.766846301 +0000 UTC m=+0.155491322 container init 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:09:20 np0005539551 podman[255498]: 2025-11-29 08:09:20.774085854 +0000 UTC m=+0.162730885 container start 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:09:20 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [NOTICE]   (255516) : New worker (255518) forked
Nov 29 03:09:20 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [NOTICE]   (255516) : Loading success.
Nov 29 03:09:21 np0005539551 nova_compute[227360]: 2025-11-29 08:09:21.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:21 np0005539551 nova_compute[227360]: 2025-11-29 08:09:21.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:21 np0005539551 nova_compute[227360]: 2025-11-29 08:09:21.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.210 227364 DEBUG nova.compute.manager [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.210 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.210 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG nova.compute.manager [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Processing event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG nova.compute.manager [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.211 227364 DEBUG oslo_concurrency.lockutils [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.212 227364 DEBUG nova.compute.manager [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.212 227364 WARNING nova.compute.manager [req-f5ecea34-e0e1-46f1-b306-858ed132b405 req-0f407a2d-2e99-4970-b016-3360ac33763c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.212 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.217 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403762.216938, 00bfed58-0a25-44a8-aa0b-2330c27be8ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.217 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.218 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.221 227364 INFO nova.virt.libvirt.driver [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Instance spawned successfully.#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.222 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.236 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.242 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.247 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.248 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.248 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.249 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.249 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.250 227364 DEBUG nova.virt.libvirt.driver [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.277 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.318 227364 INFO nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Took 8.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.319 227364 DEBUG nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.377 227364 INFO nova.compute.manager [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Took 9.19 seconds to build instance.#033[00m
Nov 29 03:09:22 np0005539551 nova_compute[227360]: 2025-11-29 08:09:22.393 227364 DEBUG oslo_concurrency.lockutils [None req-9db70b36-8ea0-46b7-828c-99c7c4ef55ce b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539551 nova_compute[227360]: 2025-11-29 08:09:23.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.063 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:24 np0005539551 NetworkManager[48922]: <info>  [1764403764.2020] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 03:09:24 np0005539551 NetworkManager[48922]: <info>  [1764403764.2041] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.437 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:24Z|00262|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.487 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.515 227364 DEBUG nova.compute.manager [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-changed-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.516 227364 DEBUG nova.compute.manager [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing instance network info cache due to event network-changed-da36c2e4-1083-48fb-ae61-47bca3a21912. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.516 227364 DEBUG oslo_concurrency.lockutils [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.517 227364 DEBUG oslo_concurrency.lockutils [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.517 227364 DEBUG nova.network.neutron [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing network info cache for port da36c2e4-1083-48fb-ae61-47bca3a21912 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3060474784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.898 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.988 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:09:24 np0005539551 nova_compute[227360]: 2025-11-29 08:09:24.989 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.019 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.171 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.173 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4513MB free_disk=20.92601776123047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.173 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.174 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.243 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.244 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.244 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:09:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.376 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2730717189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.825 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.832 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.851 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.887 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.888 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.889 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.889 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:09:25 np0005539551 nova_compute[227360]: 2025-11-29 08:09:25.922 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:09:26 np0005539551 nova_compute[227360]: 2025-11-29 08:09:26.141 227364 DEBUG nova.network.neutron [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updated VIF entry in instance network info cache for port da36c2e4-1083-48fb-ae61-47bca3a21912. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:26 np0005539551 nova_compute[227360]: 2025-11-29 08:09:26.143 227364 DEBUG nova.network.neutron [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:26 np0005539551 nova_compute[227360]: 2025-11-29 08:09:26.161 227364 DEBUG oslo_concurrency.lockutils [req-b7db0b1d-8804-48cf-ba94-64ab5089bca2 req-fbc313ad-b73a-4d02-805e-68ca2fa7a9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:27 np0005539551 nova_compute[227360]: 2025-11-29 08:09:27.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:27 np0005539551 nova_compute[227360]: 2025-11-29 08:09:27.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:09:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:27.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:29 np0005539551 nova_compute[227360]: 2025-11-29 08:09:29.066 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:29.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:30 np0005539551 nova_compute[227360]: 2025-11-29 08:09:30.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:30 np0005539551 nova_compute[227360]: 2025-11-29 08:09:30.426 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:31.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:33 np0005539551 nova_compute[227360]: 2025-11-29 08:09:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:33 np0005539551 nova_compute[227360]: 2025-11-29 08:09:33.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:09:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:33.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:33.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:34 np0005539551 nova_compute[227360]: 2025-11-29 08:09:34.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.026 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.116 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.116 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.149 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.225 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.225 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.231 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.232 227364 INFO nova.compute.claims [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:09:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.376 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:35.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2388149586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.812 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.818 227364 DEBUG nova.compute.provider_tree [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.838 227364 DEBUG nova.scheduler.client.report [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.862 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.862 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.912 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.913 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:35 np0005539551 nova_compute[227360]: 2025-11-29 08:09:35.943 227364 INFO nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.017 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.065 227364 INFO nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Booting with volume 7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb at /dev/vda#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.215 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.217 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.235 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.236 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[8a809660-cc17-4a2c-9983-62d6ef11da8c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.238 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.247 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.248 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[9201934f-044d-4d62-81da-137d79b3a473]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.249 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.262 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.262 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7f1241-cafc-4738-ae4b-26f8ead6fefd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.264 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[5e76e0e7-ebc7-4999-beb3-9fbff74577c3]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.265 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.304 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.307 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.307 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.307 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.308 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.308 227364 DEBUG nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating existing volume attachment record: f5ba17b7-40f8-4e26-9886-d3d4bc4414ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:09:36 np0005539551 nova_compute[227360]: 2025-11-29 08:09:36.569 227364 DEBUG nova.policy [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ef481e9e8e0440c91abe11aee229780', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:36Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:6c:f3 10.100.0.14
Nov 29 03:09:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:36Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:6c:f3 10.100.0.14
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.203 227364 INFO nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Booting with volume 496b8834-d1bb-44a8-b529-c5cd1c1fcd41 at /dev/vdb#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.348 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.349 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.363 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.364 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[0970cd20-d027-4ea2-915a-3f1c5a22dac9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.365 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.380 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.381 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[24835818-7ba1-4b7a-8f0c-7d1127f7a773]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.382 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.398 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.399 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[69865c29-3512-47e4-8eec-7a6b93e9d9ee]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.400 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[78112d2c-7e51-434c-a3a4-11fedcdddda5]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.400 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.433 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.436 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.436 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.436 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.437 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.437 227364 DEBUG nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating existing volume attachment record: eb056fdb-1deb-4da8-a715-3d46433fc5b1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:09:37 np0005539551 podman[255660]: 2025-11-29 08:09:37.65208985 +0000 UTC m=+0.090968973 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:37 np0005539551 podman[255661]: 2025-11-29 08:09:37.658077069 +0000 UTC m=+0.092268528 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:09:37 np0005539551 nova_compute[227360]: 2025-11-29 08:09:37.669 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully created port: 5edc2ee1-b429-4cb7-8b14-3915aa40d39c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:37 np0005539551 podman[255659]: 2025-11-29 08:09:37.679653044 +0000 UTC m=+0.120693075 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:09:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:37.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.232 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully created port: 064279fc-65b8-4bc6-9578-f43479c76dde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.505 227364 INFO nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Booting with volume 00dd489a-c89a-475f-adcb-101636396ac7 at /dev/vdc#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.623 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.624 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.643 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.643 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[214cd940-1084-4c19-88c1-5ab06ca44aec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.645 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.658 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.658 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[90cc1663-59e4-4067-b4b2-ea407e4080ee]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.661 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.676 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.676 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[7b34450c-4969-4dbf-a011-a11b021703a3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.678 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d662d5-4d7d-437a-a721-fd7cc90e4c65]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.679 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.713 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.717 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.718 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.718 227364 DEBUG os_brick.initiator.connectors.lightos [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.719 227364 DEBUG os_brick.utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:09:38 np0005539551 nova_compute[227360]: 2025-11-29 08:09:38.720 227364 DEBUG nova.virt.block_device [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating existing volume attachment record: f854b30a-a6ad-49d6-a8b7-428520e06f25 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.000 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully created port: 580e3c4b-9212-4f39-be82-c8e878e729e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.121 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.732 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully created port: 6781fd06-3321-4100-b5d3-9e92a565007b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.897 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.899 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.900 227364 INFO nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Creating image(s)#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.901 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.901 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Ensure instance console log exists: /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.902 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.903 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:39 np0005539551 nova_compute[227360]: 2025-11-29 08:09:39.903 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:40 np0005539551 nova_compute[227360]: 2025-11-29 08:09:40.029 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:40 np0005539551 nova_compute[227360]: 2025-11-29 08:09:40.616 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully created port: 75fb0180-ce5c-4d77-ab28-71baed47a210 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:41 np0005539551 nova_compute[227360]: 2025-11-29 08:09:41.360 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 5edc2ee1-b429-4cb7-8b14-3915aa40d39c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:41.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:42 np0005539551 nova_compute[227360]: 2025-11-29 08:09:42.532 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 08b93c9b-b707-46be-87f1-913a85c9fdbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:43 np0005539551 nova_compute[227360]: 2025-11-29 08:09:43.674 227364 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:43 np0005539551 nova_compute[227360]: 2025-11-29 08:09:43.675 227364 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-5edc2ee1-b429-4cb7-8b14-3915aa40d39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:43 np0005539551 nova_compute[227360]: 2025-11-29 08:09:43.676 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:43 np0005539551 nova_compute[227360]: 2025-11-29 08:09:43.676 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:43 np0005539551 nova_compute[227360]: 2025-11-29 08:09:43.677 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 5edc2ee1-b429-4cb7-8b14-3915aa40d39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:43.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:44 np0005539551 nova_compute[227360]: 2025-11-29 08:09:44.124 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:44 np0005539551 nova_compute[227360]: 2025-11-29 08:09:44.552 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.078 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.242 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.257 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.258 227364 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-08b93c9b-b707-46be-87f1-913a85c9fdbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.258 227364 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-08b93c9b-b707-46be-87f1-913a85c9fdbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.259 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.259 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.260 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 08b93c9b-b707-46be-87f1-913a85c9fdbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.470 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:45.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:45.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.822 227364 DEBUG nova.compute.manager [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.822 227364 DEBUG nova.compute.manager [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.823 227364 DEBUG oslo_concurrency.lockutils [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.941 227364 DEBUG nova.network.neutron [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.958 227364 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.959 227364 DEBUG oslo_concurrency.lockutils [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:45 np0005539551 nova_compute[227360]: 2025-11-29 08:09:45.959 227364 DEBUG nova.network.neutron [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:46 np0005539551 nova_compute[227360]: 2025-11-29 08:09:46.030 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 064279fc-65b8-4bc6-9578-f43479c76dde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:46 np0005539551 nova_compute[227360]: 2025-11-29 08:09:46.208 227364 DEBUG nova.network.neutron [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:46 np0005539551 nova_compute[227360]: 2025-11-29 08:09:46.523 227364 DEBUG nova.network.neutron [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:46 np0005539551 nova_compute[227360]: 2025-11-29 08:09:46.542 227364 DEBUG oslo_concurrency.lockutils [req-b2e552cb-d45c-4d91-8d82-32abd924a564 req-a92c13c9-88f9-4946-b07d-0ece3f275a0d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.273 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 580e3c4b-9212-4f39-be82-c8e878e729e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:47.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:47.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.922 227364 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.923 227364 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-064279fc-65b8-4bc6-9578-f43479c76dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.923 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.923 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:47 np0005539551 nova_compute[227360]: 2025-11-29 08:09:47.923 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 064279fc-65b8-4bc6-9578-f43479c76dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.295 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.539 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 6781fd06-3321-4100-b5d3-9e92a565007b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.782 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.796 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.797 227364 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.797 227364 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-580e3c4b-9212-4f39-be82-c8e878e729e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.798 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.798 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:48 np0005539551 nova_compute[227360]: 2025-11-29 08:09:48.798 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 580e3c4b-9212-4f39-be82-c8e878e729e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:49 np0005539551 nova_compute[227360]: 2025-11-29 08:09:49.050 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:49 np0005539551 nova_compute[227360]: 2025-11-29 08:09:49.127 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:49.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:49.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.190 227364 DEBUG nova.compute.manager [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.191 227364 DEBUG nova.compute.manager [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-6781fd06-3321-4100-b5d3-9e92a565007b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.191 227364 DEBUG oslo_concurrency.lockutils [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.521 227364 DEBUG nova.network.neutron [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.538 227364 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.538 227364 DEBUG oslo_concurrency.lockutils [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:50 np0005539551 nova_compute[227360]: 2025-11-29 08:09:50.538 227364 DEBUG nova.network.neutron [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 6781fd06-3321-4100-b5d3-9e92a565007b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:51 np0005539551 nova_compute[227360]: 2025-11-29 08:09:51.646 227364 DEBUG nova.network.neutron [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:51.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:51.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.542 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Successfully updated port: 75fb0180-ce5c-4d77-ab28-71baed47a210 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.558 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.625 227364 DEBUG nova.compute.manager [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.625 227364 DEBUG nova.compute.manager [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-75fb0180-ce5c-4d77-ab28-71baed47a210. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.625 227364 DEBUG oslo_concurrency.lockutils [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.648 227364 DEBUG nova.network.neutron [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.673 227364 DEBUG oslo_concurrency.lockutils [req-73a6d1ae-bfae-4606-9d33-6b97d02965fc req-bbd12390-6776-4ed6-bb89-b74bb0a45bfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.674 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.674 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:52 np0005539551 nova_compute[227360]: 2025-11-29 08:09:52.925 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:53.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:53.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:54 np0005539551 nova_compute[227360]: 2025-11-29 08:09:54.128 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539551 nova_compute[227360]: 2025-11-29 08:09:55.035 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:55.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:09:55Z|00263|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:09:55 np0005539551 nova_compute[227360]: 2025-11-29 08:09:55.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:57.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:57.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.108 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.110 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.111 227364 DEBUG nova.objects.instance [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.131 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:59.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.740 227364 DEBUG nova.objects.instance [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:59 np0005539551 nova_compute[227360]: 2025-11-29 08:09:59.755 227364 DEBUG nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:09:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:59.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:00 np0005539551 nova_compute[227360]: 2025-11-29 08:10:00.015 227364 DEBUG nova.policy [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:10:00 np0005539551 nova_compute[227360]: 2025-11-29 08:10:00.038 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:10:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:01 np0005539551 nova_compute[227360]: 2025-11-29 08:10:01.243 227364 DEBUG nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Successfully created port: fb488c88-f0f5-4e76-90e7-47161bb2a305 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:10:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:01.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.420 227364 DEBUG nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Successfully updated port: fb488c88-f0f5-4e76-90e7-47161bb2a305 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.434 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.434 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.435 227364 DEBUG nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.594 227364 DEBUG nova.compute.manager [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-changed-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.594 227364 DEBUG nova.compute.manager [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing instance network info cache due to event network-changed-fb488c88-f0f5-4e76-90e7-47161bb2a305. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.595 227364 DEBUG oslo_concurrency.lockutils [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:02 np0005539551 nova_compute[227360]: 2025-11-29 08:10:02.711 227364 WARNING nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:10:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:03.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.852 227364 DEBUG nova.network.neutron [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.887 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.887 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance network_info: |[{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.888 227364 DEBUG oslo_concurrency.lockutils [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.888 227364 DEBUG nova.network.neutron [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 75fb0180-ce5c-4d77-ab28-71baed47a210 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.895 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Start _get_guest_xml network_info=[{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=202
Nov 29 03:10:03 np0005539551 nova_compute[227360]: _disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'attached_at': '', 'detached_at': '', 'volume_id': '7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb', 'serial': '7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'f5ba17b7-40f8-4e26-9886-d3d4bc4414ab', 'volume_type': None}, {'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-496b8834-d1bb-44a8-b529-c5cd1c1fcd41', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '496b8834-d1bb-44a8-b529-c5cd1c1fcd41', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'attached_at': '', 
'detached_at': '', 'volume_id': '496b8834-d1bb-44a8-b529-c5cd1c1fcd41', 'serial': '496b8834-d1bb-44a8-b529-c5cd1c1fcd41'}, 'boot_index': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': 'eb056fdb-1deb-4da8-a715-3d46433fc5b1', 'volume_type': None}, {'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-00dd489a-c89a-475f-adcb-101636396ac7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '00dd489a-c89a-475f-adcb-101636396ac7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'attached_at': '', 'detached_at': '', 'volume_id': '00dd489a-c89a-475f-adcb-101636396ac7', 'serial': '00dd489a-c89a-475f-adcb-101636396ac7'}, 'boot_index': 2, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'attachment_id': 'f854b30a-a6ad-49d6-a8b7-428520e06f25', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.899 227364 WARNING nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.903 227364 DEBUG nova.virt.libvirt.host [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.904 227364 DEBUG nova.virt.libvirt.host [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.906 227364 DEBUG nova.virt.libvirt.host [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.907 227364 DEBUG nova.virt.libvirt.host [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.908 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.909 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.909 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.909 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.909 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.910 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.910 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.910 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.910 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.910 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.911 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.911 227364 DEBUG nova.virt.hardware [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:10:03 np0005539551 rsyslogd[1004]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 08:10:03.895 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.943 227364 DEBUG nova.storage.rbd_utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] rbd image 86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:03 np0005539551 nova_compute[227360]: 2025-11-29 08:10:03.947 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.133 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2222604874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.425 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.488 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.489 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.490 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.491 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.491 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.491 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.492 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.492 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.493 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.493 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.493 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.494 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.494 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.495 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.495 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.496 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.496 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.496 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.497 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.497 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.498 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.498 227364 DEBUG nova.objects.instance [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86dc24b2-55cd-4720-825d-14b5a233fc8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.511 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <uuid>86dc24b2-55cd-4720-825d-14b5a233fc8f</uuid>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <name>instance-0000004d</name>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:name>tempest-device-tagging-server-1339930826</nova:name>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:10:03</nova:creationTime>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:user uuid="6ef481e9e8e0440c91abe11aee229780">tempest-TaggedBootDevicesTest_v242-1392053176-project-member</nova:user>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:project uuid="28c3d09b9e21417cb7bc44b8552f1b81">tempest-TaggedBootDevicesTest_v242-1392053176</nova:project>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="5edc2ee1-b429-4cb7-8b14-3915aa40d39c">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="08b93c9b-b707-46be-87f1-913a85c9fdbb">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.1.101" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="0b94f521-1bc4-4c6b-9d91-f98c1b1371ea">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.1.188" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="064279fc-65b8-4bc6-9578-f43479c76dde">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.1.67" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="580e3c4b-9212-4f39-be82-c8e878e729e2">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.1.1.23" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="6781fd06-3321-4100-b5d3-9e92a565007b">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <nova:port uuid="75fb0180-ce5c-4d77-ab28-71baed47a210">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="serial">86dc24b2-55cd-4720-825d-14b5a233fc8f</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="uuid">86dc24b2-55cd-4720-825d-14b5a233fc8f</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <serial>7ea84dcf-1b42-49e3-8f53-f5de6e85eeeb</serial>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-496b8834-d1bb-44a8-b529-c5cd1c1fcd41">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <serial>496b8834-d1bb-44a8-b529-c5cd1c1fcd41</serial>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-00dd489a-c89a-475f-adcb-101636396ac7">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="vdc" bus="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <serial>00dd489a-c89a-475f-adcb-101636396ac7</serial>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:73:c5:06"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap5edc2ee1-b4"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:7d:dd:ef"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap08b93c9b-b7"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a7:4f:0c"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap0b94f521-1b"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:8f:4c:8b"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap064279fc-65"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:e1:43:84"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap580e3c4b-92"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a4:c3:f8"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap6781fd06-33"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:07:8b:79"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <target dev="tap75fb0180-ce"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/console.log" append="off"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:10:04 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:10:04 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:10:04 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:10:04 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.513 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.514 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.515 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.515 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.516 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.516 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.517 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.517 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.518 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.518 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.519 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.519 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.520 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.520 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.520 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.521 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.521 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.522 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.523 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.523 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.523 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.524 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.524 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.525 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.525 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Preparing to wait for external event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.526 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.526 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.526 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.528 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.528 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.529 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.530 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.531 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.532 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.532 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.537 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.537 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5edc2ee1-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.538 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5edc2ee1-b4, col_values=(('external_ids', {'iface-id': '5edc2ee1-b429-4cb7-8b14-3915aa40d39c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:c5:06', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.5430] manager: (tap5edc2ee1-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.544 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.552 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.554 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.555 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.556 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.557 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.558 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.559 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.559 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.560 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.563 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.563 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08b93c9b-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.564 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08b93c9b-b7, col_values=(('external_ids', {'iface-id': '08b93c9b-b707-46be-87f1-913a85c9fdbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:dd:ef', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.5697] manager: (tap08b93c9b-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.571 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.579 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.580 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.580 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.581 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.581 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.582 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.582 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.582 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.582 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.584 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.584 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b94f521-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.584 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b94f521-1b, col_values=(('external_ids', {'iface-id': '0b94f521-1bc4-4c6b-9d91-f98c1b1371ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:4f:0c', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.585 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.5867] manager: (tap0b94f521-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.588 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.596 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.597 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.598 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.598 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.599 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.599 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.600 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.600 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.600 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.602 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.602 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap064279fc-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.603 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap064279fc-65, col_values=(('external_ids', {'iface-id': '064279fc-65b8-4bc6-9578-f43479c76dde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:4c:8b', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.604 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.6049] manager: (tap064279fc-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.606 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.620 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.621 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.621 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.622 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.622 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.622 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.623 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.623 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.623 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.625 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.625 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580e3c4b-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.625 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap580e3c4b-92, col_values=(('external_ids', {'iface-id': '580e3c4b-9212-4f39-be82-c8e878e729e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:43:84', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.626 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.6274] manager: (tap580e3c4b-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.628 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.644 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.644 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.645 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.645 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.646 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.646 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.646 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.647 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.647 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.649 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.649 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6781fd06-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.649 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6781fd06-33, col_values=(('external_ids', {'iface-id': '6781fd06-3321-4100-b5d3-9e92a565007b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:c3:f8', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.650 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.6518] manager: (tap6781fd06-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.676 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.677 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.678 227364 DEBUG nova.virt.libvirt.vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.678 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.679 227364 DEBUG nova.network.os_vif_util [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.679 227364 DEBUG os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.680 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.681 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.682 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.683 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75fb0180-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.683 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75fb0180-ce, col_values=(('external_ids', {'iface-id': '75fb0180-ce5c-4d77-ab28-71baed47a210', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:8b:79', 'vm-uuid': '86dc24b2-55cd-4720-825d-14b5a233fc8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539551 NetworkManager[48922]: <info>  [1764403804.6853] manager: (tap75fb0180-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.687 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.705 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.706 227364 INFO os_vif [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce')#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.755 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.756 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.756 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] No VIF found with MAC fa:16:3e:73:c5:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.756 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] No VIF found with MAC fa:16:3e:e1:43:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.757 227364 INFO nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Using config drive#033[00m
Nov 29 03:10:04 np0005539551 nova_compute[227360]: 2025-11-29 08:10:04.785 227364 DEBUG nova.storage.rbd_utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] rbd image 86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.626 227364 INFO nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Creating config drive at /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config#033[00m
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.632 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou9sp9x2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.771 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou9sp9x2" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:05.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.818 227364 DEBUG nova.storage.rbd_utils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] rbd image 86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539551 nova_compute[227360]: 2025-11-29 08:10:05.823 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config 86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.004 227364 DEBUG oslo_concurrency.processutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config 86dc24b2-55cd-4720-825d-14b5a233fc8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.005 227364 INFO nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Deleting local config drive /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.0615] manager: (tap5edc2ee1-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 03:10:06 np0005539551 kernel: tap5edc2ee1-b4: entered promiscuous mode
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.0848] manager: (tap08b93c9b-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 03:10:06 np0005539551 kernel: tap08b93c9b-b7: entered promiscuous mode
Nov 29 03:10:06 np0005539551 systemd-udevd[255873]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:06 np0005539551 systemd-udevd[255871]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.093 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00264|binding|INFO|Claiming lport 08b93c9b-b707-46be-87f1-913a85c9fdbb for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00265|binding|INFO|08b93c9b-b707-46be-87f1-913a85c9fdbb: Claiming fa:16:3e:7d:dd:ef 10.1.1.101
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00266|binding|INFO|Claiming lport 5edc2ee1-b429-4cb7-8b14-3915aa40d39c for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00267|binding|INFO|5edc2ee1-b429-4cb7-8b14-3915aa40d39c: Claiming fa:16:3e:73:c5:06 10.100.0.11
Nov 29 03:10:06 np0005539551 kernel: tap0b94f521-1b: entered promiscuous mode
Nov 29 03:10:06 np0005539551 systemd-udevd[255882]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1066] manager: (tap0b94f521-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1075] device (tap5edc2ee1-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1082] device (tap08b93c9b-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1089] device (tap5edc2ee1-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1092] device (tap08b93c9b-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.109 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:c5:06 10.100.0.11'], port_security=['fa:16:3e:73:c5:06 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55cabc28-801b-476b-85c5-aa3b3b098458, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5edc2ee1-b429-4cb7-8b14-3915aa40d39c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.112 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:ef 10.1.1.101'], port_security=['fa:16:3e:7d:dd:ef 10.1.1.101'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-207477711', 'neutron:cidrs': '10.1.1.101/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-207477711', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7f8a9bb-1199-4c44-851e-aceb219e03cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=08b93c9b-b707-46be-87f1-913a85c9fdbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.114 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5edc2ee1-b429-4cb7-8b14-3915aa40d39c in datapath 997fb1c7-7a6a-4755-bd31-f24f7590c80c bound to our chassis#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.118 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997fb1c7-7a6a-4755-bd31-f24f7590c80c#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1200] device (tap0b94f521-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1214] device (tap0b94f521-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1253] manager: (tap064279fc-65): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.136 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b8c1a6-06ca-4546-ae04-0d9d3d2c550e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.138 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997fb1c7-71 in ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.142 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997fb1c7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.143 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[605ebeb1-c872-4eea-bc63-f09f9b47a805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.143 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a3431aad-5277-47b5-bb06-bd9738d975c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1478] manager: (tap580e3c4b-92): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.158 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[1e33b37d-fa36-406f-bc64-97311a774d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00268|binding|INFO|Claiming lport 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00269|binding|INFO|0b94f521-1bc4-4c6b-9d91-f98c1b1371ea: Claiming fa:16:3e:a7:4f:0c 10.1.1.188
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.164 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 kernel: tap580e3c4b-92: entered promiscuous mode
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1690] manager: (tap6781fd06-33): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 03:10:06 np0005539551 kernel: tap6781fd06-33: entered promiscuous mode
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1694] device (tap580e3c4b-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1708] device (tap580e3c4b-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.171 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:4f:0c 10.1.1.188'], port_security=['fa:16:3e:a7:4f:0c 10.1.1.188'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1346256411', 'neutron:cidrs': '10.1.1.188/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1346256411', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7f8a9bb-1199-4c44-851e-aceb219e03cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1755] device (tap064279fc-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 kernel: tap064279fc-65: entered promiscuous mode
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1767] device (tap064279fc-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1778] device (tap6781fd06-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1787] device (tap6781fd06-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00270|binding|INFO|Claiming lport 064279fc-65b8-4bc6-9578-f43479c76dde for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00271|binding|INFO|064279fc-65b8-4bc6-9578-f43479c76dde: Claiming fa:16:3e:8f:4c:8b 10.1.1.67
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00272|binding|INFO|Claiming lport 580e3c4b-9212-4f39-be82-c8e878e729e2 for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00273|binding|INFO|580e3c4b-9212-4f39-be82-c8e878e729e2: Claiming fa:16:3e:e1:43:84 10.1.1.23
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00274|binding|INFO|Claiming lport 6781fd06-3321-4100-b5d3-9e92a565007b for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00275|binding|INFO|6781fd06-3321-4100-b5d3-9e92a565007b: Claiming fa:16:3e:a4:c3:f8 10.2.2.100
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.1888] manager: (tap75fb0180-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.189 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 kernel: tap75fb0180-ce: entered promiscuous mode
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00276|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.192 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:c3:f8 10.2.2.100'], port_security=['fa:16:3e:a4:c3:f8 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c657f09f-ecca-4e01-9a07-39931c8c1994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93bd27dc-d7ec-4f57-a690-c5f336775299, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6781fd06-3321-4100-b5d3-9e92a565007b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.194 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:43:84 10.1.1.23'], port_security=['fa:16:3e:e1:43:84 10.1.1.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.23/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=580e3c4b-9212-4f39-be82-c8e878e729e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.196 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:4c:8b 10.1.1.67'], port_security=['fa:16:3e:8f:4c:8b 10.1.1.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.67/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=064279fc-65b8-4bc6-9578-f43479c76dde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.196 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[572dd755-1061-45f9-93b5-da13327a9df6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00277|binding|INFO|Setting lport 5edc2ee1-b429-4cb7-8b14-3915aa40d39c ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00278|binding|INFO|Setting lport 5edc2ee1-b429-4cb7-8b14-3915aa40d39c up in Southbound
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00279|binding|INFO|Setting lport 08b93c9b-b707-46be-87f1-913a85c9fdbb ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00280|binding|INFO|Setting lport 08b93c9b-b707-46be-87f1-913a85c9fdbb up in Southbound
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.2050] device (tap75fb0180-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.2057] device (tap75fb0180-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.206 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00281|binding|INFO|Claiming lport 75fb0180-ce5c-4d77-ab28-71baed47a210 for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00282|binding|INFO|75fb0180-ce5c-4d77-ab28-71baed47a210: Claiming fa:16:3e:07:8b:79 10.2.2.200
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00283|binding|INFO|Setting lport 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00284|binding|INFO|Setting lport 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea up in Southbound
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.210 227364 DEBUG nova.network.neutron [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.213 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:8b:79 10.2.2.200'], port_security=['fa:16:3e:07:8b:79 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c657f09f-ecca-4e01-9a07-39931c8c1994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93bd27dc-d7ec-4f57-a690-c5f336775299, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=75fb0180-ce5c-4d77-ab28-71baed47a210) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.230 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b7a725-55c8-44b0-8a50-c409ce26a17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 systemd-machined[190756]: New machine qemu-35-instance-0000004d.
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.2373] manager: (tap997fb1c7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.237 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4560a4c8-159b-4ec0-a46c-5fc65a2f959e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.240 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.241 227364 DEBUG oslo_concurrency.lockutils [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.242 227364 DEBUG nova.network.neutron [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Refreshing network info cache for port fb488c88-f0f5-4e76-90e7-47161bb2a305 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.248 227364 DEBUG nova.virt.libvirt.vif [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.249 227364 DEBUG nova.network.os_vif_util [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.249 227364 DEBUG nova.network.os_vif_util [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.250 227364 DEBUG os_vif [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.251 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.251 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.260 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.264 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[426c13c9-c616-4df2-95da-eb6a2675d4b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.260 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb488c88-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.267 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb488c88-f0, col_values=(('external_ids', {'iface-id': 'fb488c88-f0f5-4e76-90e7-47161bb2a305', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:26:83', 'vm-uuid': '00bfed58-0a25-44a8-aa0b-2330c27be8ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.268 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d54b20d3-0158-4b2c-827a-3ddc914316c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.2697] manager: (tapfb488c88-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 03:10:06 np0005539551 systemd[1]: Started Virtual Machine qemu-35-instance-0000004d.
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.272 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.276 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.281 227364 INFO os_vif [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0')#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.281 227364 DEBUG nova.virt.libvirt.vif [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.282 227364 DEBUG nova.network.os_vif_util [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.282 227364 DEBUG nova.network.os_vif_util [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.286 227364 DEBUG nova.virt.libvirt.guest [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:fb:26:83"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <target dev="tapfb488c88-f0"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:10:06 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00285|binding|INFO|Setting lport 580e3c4b-9212-4f39-be82-c8e878e729e2 ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00286|binding|INFO|Setting lport 580e3c4b-9212-4f39-be82-c8e878e729e2 up in Southbound
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00287|binding|INFO|Setting lport 6781fd06-3321-4100-b5d3-9e92a565007b ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00288|binding|INFO|Setting lport 6781fd06-3321-4100-b5d3-9e92a565007b up in Southbound
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00289|binding|INFO|Setting lport 064279fc-65b8-4bc6-9578-f43479c76dde ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00290|binding|INFO|Setting lport 064279fc-65b8-4bc6-9578-f43479c76dde up in Southbound
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.292 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.2938] device (tap997fb1c7-70): carrier: link connected
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.300 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ca395db7-24ba-4ee8-8465-39f1a7f79059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 kernel: tapfb488c88-f0: entered promiscuous mode
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00291|binding|INFO|Setting lport 75fb0180-ce5c-4d77-ab28-71baed47a210 ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00292|binding|INFO|Setting lport 75fb0180-ce5c-4d77-ab28-71baed47a210 up in Southbound
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.3029] manager: (tapfb488c88-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00293|if_status|INFO|Not updating pb chassis for fb488c88-f0f5-4e76-90e7-47161bb2a305 now as sb is readonly
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.303 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00294|binding|INFO|Claiming lport fb488c88-f0f5-4e76-90e7-47161bb2a305 for this chassis.
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00295|binding|INFO|fb488c88-f0f5-4e76-90e7-47161bb2a305: Claiming fa:16:3e:fb:26:83 10.100.0.13
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.3160] device (tapfb488c88-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.315 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:26:83 10.100.0.13'], port_security=['fa:16:3e:fb:26:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '00bfed58-0a25-44a8-aa0b-2330c27be8ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=fb488c88-f0f5-4e76-90e7-47161bb2a305) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.3167] device (tapfb488c88-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.319 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0955b0a5-d798-4c86-821d-93af60b5863e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997fb1c7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:b5:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685317, 'reachable_time': 15591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255937, 'error': None, 'target': 'ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00296|binding|INFO|Setting lport fb488c88-f0f5-4e76-90e7-47161bb2a305 ovn-installed in OVS
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00297|binding|INFO|Setting lport fb488c88-f0f5-4e76-90e7-47161bb2a305 up in Southbound
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.323 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.335 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2b83a9c8-71c2-4421-99d1-aa14d1ca5866]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:b54d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685317, 'tstamp': 685317}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255942, 'error': None, 'target': 'ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.352 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5df7bac1-9f80-4175-a4f1-655f3c42b9d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997fb1c7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:b5:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685317, 'reachable_time': 15591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255945, 'error': None, 'target': 'ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.382 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f38c589c-2b40-4690-ad23-6941371c52b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.392 227364 DEBUG nova.virt.libvirt.driver [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.392 227364 DEBUG nova.virt.libvirt.driver [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.392 227364 DEBUG nova.virt.libvirt.driver [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:52:6c:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.393 227364 DEBUG nova.virt.libvirt.driver [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:fb:26:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.418 227364 DEBUG nova.virt.libvirt.guest [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:06</nova:creationTime>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:06 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    <nova:port uuid="fb488c88-f0f5-4e76-90e7-47161bb2a305">
Nov 29 03:10:06 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:06 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:06 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:06 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.440 227364 DEBUG oslo_concurrency.lockutils [None req-ae3974a4-0300-426d-879e-3891ef374141 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.442 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45c32803-0b34-44a6-aca2-47867b51a247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.444 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997fb1c7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.444 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.445 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997fb1c7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.447 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 kernel: tap997fb1c7-70: entered promiscuous mode
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.449 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 NetworkManager[48922]: <info>  [1764403806.4495] manager: (tap997fb1c7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.450 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997fb1c7-70, col_values=(('external_ids', {'iface-id': 'cb5a511a-32e6-4894-af70-b048f372e4c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.452 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:06Z|00298|binding|INFO|Releasing lport cb5a511a-32e6-4894-af70-b048f372e4c4 from this chassis (sb_readonly=0)
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.470 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.473 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997fb1c7-7a6a-4755-bd31-f24f7590c80c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997fb1c7-7a6a-4755-bd31-f24f7590c80c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.474 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4165a18-0cd9-47a5-9621-5704d45428a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.476 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-997fb1c7-7a6a-4755-bd31-f24f7590c80c
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/997fb1c7-7a6a-4755-bd31-f24f7590c80c.pid.haproxy
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 997fb1c7-7a6a-4755-bd31-f24f7590c80c
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.479 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'env', 'PROCESS_TAG=haproxy-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997fb1c7-7a6a-4755-bd31-f24f7590c80c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.481 227364 DEBUG nova.compute.manager [req-ef9793e3-d4c5-4b87-bfc9-bef6db4e548d req-e16f7fcc-0b6a-4693-bc72-ea557fb66f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.481 227364 DEBUG oslo_concurrency.lockutils [req-ef9793e3-d4c5-4b87-bfc9-bef6db4e548d req-e16f7fcc-0b6a-4693-bc72-ea557fb66f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.481 227364 DEBUG oslo_concurrency.lockutils [req-ef9793e3-d4c5-4b87-bfc9-bef6db4e548d req-e16f7fcc-0b6a-4693-bc72-ea557fb66f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.482 227364 DEBUG oslo_concurrency.lockutils [req-ef9793e3-d4c5-4b87-bfc9-bef6db4e548d req-e16f7fcc-0b6a-4693-bc72-ea557fb66f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.482 227364 DEBUG nova.compute.manager [req-ef9793e3-d4c5-4b87-bfc9-bef6db4e548d req-e16f7fcc-0b6a-4693-bc72-ea557fb66f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.630 227364 DEBUG nova.compute.manager [req-36b51234-c10b-4a89-9c5c-a4f1681aafb2 req-b8c0796b-e41b-44fc-9ad2-507b2f32c7d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.630 227364 DEBUG oslo_concurrency.lockutils [req-36b51234-c10b-4a89-9c5c-a4f1681aafb2 req-b8c0796b-e41b-44fc-9ad2-507b2f32c7d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.630 227364 DEBUG oslo_concurrency.lockutils [req-36b51234-c10b-4a89-9c5c-a4f1681aafb2 req-b8c0796b-e41b-44fc-9ad2-507b2f32c7d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.631 227364 DEBUG oslo_concurrency.lockutils [req-36b51234-c10b-4a89-9c5c-a4f1681aafb2 req-b8c0796b-e41b-44fc-9ad2-507b2f32c7d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.631 227364 DEBUG nova.compute.manager [req-36b51234-c10b-4a89-9c5c-a4f1681aafb2 req-b8c0796b-e41b-44fc-9ad2-507b2f32c7d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.774 227364 DEBUG nova.compute.manager [req-7875d280-2401-40b0-8eaa-59399e01b7ff req-06382fbf-78fb-4b26-8828-804d154d0157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.775 227364 DEBUG oslo_concurrency.lockutils [req-7875d280-2401-40b0-8eaa-59399e01b7ff req-06382fbf-78fb-4b26-8828-804d154d0157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.775 227364 DEBUG oslo_concurrency.lockutils [req-7875d280-2401-40b0-8eaa-59399e01b7ff req-06382fbf-78fb-4b26-8828-804d154d0157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.776 227364 DEBUG oslo_concurrency.lockutils [req-7875d280-2401-40b0-8eaa-59399e01b7ff req-06382fbf-78fb-4b26-8828-804d154d0157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.776 227364 DEBUG nova.compute.manager [req-7875d280-2401-40b0-8eaa-59399e01b7ff req-06382fbf-78fb-4b26-8828-804d154d0157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.834 227364 DEBUG nova.network.neutron [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updated VIF entry in instance network info cache for port 75fb0180-ce5c-4d77-ab28-71baed47a210. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.835 227364 DEBUG nova.network.neutron [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.856 227364 DEBUG oslo_concurrency.lockutils [req-c64e33db-e067-4ec0-b6d5-fd4f33132708 req-3e6d9c17-dab8-4444-b232-7be4154d689f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:06 np0005539551 podman[256048]: 2025-11-29 08:10:06.912793219 +0000 UTC m=+0.062064914 container create 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:06.941 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.943 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539551 systemd[1]: Started libpod-conmon-901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856.scope.
Nov 29 03:10:06 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.966 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403806.9660397, 86dc24b2-55cd-4720-825d-14b5a233fc8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:06 np0005539551 nova_compute[227360]: 2025-11-29 08:10:06.966 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:06 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9824864aa90d38506ad00615b357c329398c35795238d3e9fb2926615366d9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:06 np0005539551 podman[256048]: 2025-11-29 08:10:06.886733445 +0000 UTC m=+0.036005200 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:06 np0005539551 podman[256048]: 2025-11-29 08:10:06.983070951 +0000 UTC m=+0.132342646 container init 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:10:06 np0005539551 podman[256048]: 2025-11-29 08:10:06.990116799 +0000 UTC m=+0.139388494 container start 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.008 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.011 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403806.9661384, 86dc24b2-55cd-4720-825d-14b5a233fc8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.011 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:10:07 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [NOTICE]   (256081) : New worker (256083) forked
Nov 29 03:10:07 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [NOTICE]   (256081) : Loading success.
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.036 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.039 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.044 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 08b93c9b-b707-46be-87f1-913a85c9fdbb in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.047 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e9e9288-8ab6-453e-b0a5-e16458d62484#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.056 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fff68240-b0b8-400f-957f-67951ae2ffea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.057 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e9e9288-81 in ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.060 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e9e9288-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.060 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.060 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[379c0ff2-d5ec-4f8b-946b-3a50fbf286b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.061 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d758aca2-082c-4e15-8a19-2f7c57383aa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.071 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[05d68b12-f4a3-4418-b827-757b30954317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[db1b4c86-22b5-4a86-810e-4c866b312935]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.122 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc18513-e3ed-453f-9fb5-c9ad4d5e0cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.127 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5344a278-a1db-4f91-bac2-667c90d41f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 NetworkManager[48922]: <info>  [1764403807.1292] manager: (tap2e9e9288-80): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.159 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[686ada81-8dad-4931-a6ba-2569fa4de963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.162 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a12b1056-99ab-4b9c-9750-990ba968823f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 NetworkManager[48922]: <info>  [1764403807.1836] device (tap2e9e9288-80): carrier: link connected
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.189 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3726bef6-13bf-43ef-8a9a-c687ec4c4e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.207 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1569ab-5071-4593-9327-f459b7aaa09f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e9e9288-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:4f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685406, 'reachable_time': 28796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256102, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.222 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f5f2b-39ef-448f-ba61-611554a36387]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:4f92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685406, 'tstamp': 685406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256103, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.241 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3591a327-9540-4d24-86ed-f4505f92b328]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e9e9288-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:4f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685406, 'reachable_time': 28796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256104, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.266 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa5db0f-8a42-4411-a157-5bb9d3ec34a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.322 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ea94f7-6f55-4702-a0b1-29433ab48429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.323 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e9e9288-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.323 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.324 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e9e9288-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539551 NetworkManager[48922]: <info>  [1764403807.3263] manager: (tap2e9e9288-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 03:10:07 np0005539551 kernel: tap2e9e9288-80: entered promiscuous mode
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.329 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e9e9288-80, col_values=(('external_ids', {'iface-id': '3e2a376c-d7ad-4503-a53d-422555d8b00e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:07Z|00299|binding|INFO|Releasing lport 3e2a376c-d7ad-4503-a53d-422555d8b00e from this chassis (sb_readonly=0)
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.330 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.349 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.350 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e9e9288-8ab6-453e-b0a5-e16458d62484.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e9e9288-8ab6-453e-b0a5-e16458d62484.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.351 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[946ccf2d-6925-4fb8-a9f7-ae66e2a75831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.351 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-2e9e9288-8ab6-453e-b0a5-e16458d62484
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/2e9e9288-8ab6-453e-b0a5-e16458d62484.pid.haproxy
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 2e9e9288-8ab6-453e-b0a5-e16458d62484
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.353 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'env', 'PROCESS_TAG=haproxy-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e9e9288-8ab6-453e-b0a5-e16458d62484.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:07 np0005539551 podman[256135]: 2025-11-29 08:10:07.713556664 +0000 UTC m=+0.047434194 container create c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:10:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:07 np0005539551 systemd[1]: Started libpod-conmon-c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30.scope.
Nov 29 03:10:07 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:10:07 np0005539551 podman[256135]: 2025-11-29 08:10:07.687627794 +0000 UTC m=+0.021505344 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:07 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53b8c2d3342ab6a497ac134c78a63be73972ba575c014bc1292ee79a5420a7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:07 np0005539551 podman[256135]: 2025-11-29 08:10:07.803108359 +0000 UTC m=+0.136985889 container init c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:10:07 np0005539551 podman[256135]: 2025-11-29 08:10:07.80987916 +0000 UTC m=+0.143756710 container start c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:10:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:07.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:07 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [NOTICE]   (256192) : New worker (256207) forked
Nov 29 03:10:07 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [NOTICE]   (256192) : Loading success.
Nov 29 03:10:07 np0005539551 podman[256152]: 2025-11-29 08:10:07.842075007 +0000 UTC m=+0.085726845 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd)
Nov 29 03:10:07 np0005539551 podman[256153]: 2025-11-29 08:10:07.847564523 +0000 UTC m=+0.084400558 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:07 np0005539551 podman[256148]: 2025-11-29 08:10:07.849577777 +0000 UTC m=+0.094296703 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.868 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.871 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e9e9288-8ab6-453e-b0a5-e16458d62484#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[69d8fcfc-0660-4005-8140-d0c6e3fbf56d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.908 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[01ca0f40-25c9-4f17-8f5f-6b1f6286d624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.911 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d16bbc-4e69-4ea5-94cc-2ce77a1a9a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.943 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[59e61a6a-721a-4062-855b-2f2f07b3f37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.967 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdafc67-756a-4684-a7aa-fde8fb8a229c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e9e9288-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:4f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685406, 'reachable_time': 28796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256231, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.987 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2895c249-fc89-4c37-96cb-50efc46f4293]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685416, 'tstamp': 685416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256232, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685419, 'tstamp': 685419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256232, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.989 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e9e9288-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.990 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539551 nova_compute[227360]: 2025-11-29 08:10:07.991 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e9e9288-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e9e9288-80, col_values=(('external_ids', {'iface-id': '3e2a376c-d7ad-4503-a53d-422555d8b00e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.993 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6781fd06-3321-4100-b5d3-9e92a565007b in datapath c657f09f-ecca-4e01-9a07-39931c8c1994 unbound from our chassis#033[00m
Nov 29 03:10:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:07.995 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c657f09f-ecca-4e01-9a07-39931c8c1994#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.011 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5880ea06-612d-4430-91b3-b8348f12e288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.012 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc657f09f-e1 in ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.013 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc657f09f-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.013 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0f5f15-74ff-475b-9ea7-2ef8534a0fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.014 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a6607fa2-4925-4a88-9814-04f465908606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.031 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e06187b5-4789-4b11-bc21-1fdedf54bb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.058 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c44e898-7dc7-41c2-aa4c-dba33cd6b38e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.065 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.088 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[aa578869-d1d0-4a3e-8f85-b48a0366e96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 NetworkManager[48922]: <info>  [1764403808.0939] manager: (tapc657f09f-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c6383e79-bfb4-4f70-a398-55e9db3669a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.120 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9037e113-5fb9-4007-9bd0-b9dc9265317e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.123 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[414c7e74-8f54-4220-830c-f7fd3d9b89ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 NetworkManager[48922]: <info>  [1764403808.1473] device (tapc657f09f-e0): carrier: link connected
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.154 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1a220a58-0289-46d0-836e-a6438ce0fd7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.177 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[52fa394f-6905-45fc-b071-f2f6b0cfa935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc657f09f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:40:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685502, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256243, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.198 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a966852-ee23-4234-8c07-1edcb44bb63c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:40ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685502, 'tstamp': 685502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256244, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.220 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e4500a-e436-4cb6-ba52-c34c825c9749]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc657f09f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:40:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685502, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256245, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:08Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:26:83 10.100.0.13
Nov 29 03:10:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:08Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:26:83 10.100.0.13
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.257 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3438e065-8274-4dab-8faa-50573e2b8374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.347 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb61362-4c95-4ac4-a03f-9f8fef30f75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.348 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc657f09f-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.348 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.348 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc657f09f-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:08 np0005539551 NetworkManager[48922]: <info>  [1764403808.3518] manager: (tapc657f09f-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.350 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539551 kernel: tapc657f09f-e0: entered promiscuous mode
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.355 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.355 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc657f09f-e0, col_values=(('external_ids', {'iface-id': '05072234-6e33-43d2-b68f-8c1a438d6b05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:08Z|00300|binding|INFO|Releasing lport 05072234-6e33-43d2-b68f-8c1a438d6b05 from this chassis (sb_readonly=0)
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.388 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c657f09f-ecca-4e01-9a07-39931c8c1994.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c657f09f-ecca-4e01-9a07-39931c8c1994.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.389 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c73e335e-c695-4bee-90ab-7d8a4a5ebb19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.390 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-c657f09f-ecca-4e01-9a07-39931c8c1994
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/c657f09f-ecca-4e01-9a07-39931c8c1994.pid.haproxy
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID c657f09f-ecca-4e01-9a07-39931c8c1994
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.390 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'env', 'PROCESS_TAG=haproxy-c657f09f-ecca-4e01-9a07-39931c8c1994', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c657f09f-ecca-4e01-9a07-39931c8c1994.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.555 227364 DEBUG nova.network.neutron [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updated VIF entry in instance network info cache for port fb488c88-f0f5-4e76-90e7-47161bb2a305. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.556 227364 DEBUG nova.network.neutron [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.617 227364 DEBUG nova.compute.manager [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.617 227364 DEBUG oslo_concurrency.lockutils [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.618 227364 DEBUG oslo_concurrency.lockutils [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.618 227364 DEBUG oslo_concurrency.lockutils [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.618 227364 DEBUG nova.compute.manager [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No event matching network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c in dict_keys([('network-vif-plugged', '08b93c9b-b707-46be-87f1-913a85c9fdbb'), ('network-vif-plugged', '0b94f521-1bc4-4c6b-9d91-f98c1b1371ea'), ('network-vif-plugged', '064279fc-65b8-4bc6-9578-f43479c76dde'), ('network-vif-plugged', '75fb0180-ce5c-4d77-ab28-71baed47a210')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.618 227364 WARNING nova.compute.manager [req-ab69ec71-22cf-441d-b1b1-2b841e211e46 req-81e9c551-67c9-4116-8a7e-5dc4a860b61f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.620 227364 DEBUG oslo_concurrency.lockutils [req-8b601abc-0f1a-492d-80e1-2abcd50b2e28 req-ccce72d3-4090-41f1-9b2c-6ee4f5e4db1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.689 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.689 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.689 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.689 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.690 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No event matching network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb in dict_keys([('network-vif-plugged', '0b94f521-1bc4-4c6b-9d91-f98c1b1371ea'), ('network-vif-plugged', '064279fc-65b8-4bc6-9578-f43479c76dde'), ('network-vif-plugged', '75fb0180-ce5c-4d77-ab28-71baed47a210')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.691 227364 WARNING nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.691 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.691 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.691 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.691 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.692 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.692 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.692 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.692 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.692 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.693 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No event matching network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea in dict_keys([('network-vif-plugged', '064279fc-65b8-4bc6-9578-f43479c76dde'), ('network-vif-plugged', '75fb0180-ce5c-4d77-ab28-71baed47a210')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.693 227364 WARNING nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.693 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.693 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.693 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.694 227364 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.694 227364 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.792 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.792 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.792 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.792 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.792 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No event matching network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b in dict_keys([('network-vif-plugged', '064279fc-65b8-4bc6-9578-f43479c76dde')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 WARNING nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.793 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.794 227364 WARNING nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.794 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.794 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.794 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.794 227364 DEBUG oslo_concurrency.lockutils [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.795 227364 DEBUG nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.795 227364 WARNING nova.compute.manager [req-14cd9a4d-28d1-4467-90dc-a6c29ecc1060 req-3c2f7df1-c1b2-45ae-b992-b2e4a495a810 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:08 np0005539551 podman[256278]: 2025-11-29 08:10:08.831393603 +0000 UTC m=+0.045957754 container create ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:10:08 np0005539551 systemd[1]: Started libpod-conmon-ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c.scope.
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.875 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.875 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.875 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.875 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No event matching network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 in dict_keys([('network-vif-plugged', '064279fc-65b8-4bc6-9578-f43479c76dde')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 WARNING nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.876 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Processing event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG oslo_concurrency.lockutils [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.877 227364 DEBUG nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.878 227364 WARNING nova.compute.manager [req-aececc60-2895-4703-9682-32c18d81195d req-36f1639f-c5ad-4874-820d-cbffef9fd804 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.878 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.883 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403808.8831096, 86dc24b2-55cd-4720-825d-14b5a233fc8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.883 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.885 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:08 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.892 227364 INFO nova.virt.libvirt.driver [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance spawned successfully.#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.893 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:08 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb049d783d48e827362d5b273a84a1e8ff56e9f141e96df35b9c6f57ad708bef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:08 np0005539551 podman[256278]: 2025-11-29 08:10:08.806744747 +0000 UTC m=+0.021308918 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:08 np0005539551 podman[256278]: 2025-11-29 08:10:08.909705029 +0000 UTC m=+0.124269200 container init ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.909 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.912 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:08 np0005539551 podman[256278]: 2025-11-29 08:10:08.914911538 +0000 UTC m=+0.129475689 container start ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.920 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.920 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.920 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.921 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.921 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.921 227364 DEBUG nova.virt.libvirt.driver [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:08 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [NOTICE]   (256297) : New worker (256299) forked
Nov 29 03:10:08 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [NOTICE]   (256297) : Loading success.
Nov 29 03:10:08 np0005539551 nova_compute[227360]: 2025-11-29 08:10:08.970 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.979 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 580e3c4b-9212-4f39-be82-c8e878e729e2 in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.982 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e9e9288-8ab6-453e-b0a5-e16458d62484#033[00m
Nov 29 03:10:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:08.994 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ce48347b-c530-42c8-9023-79841a7f04dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.025 227364 INFO nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Took 29.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.026 227364 DEBUG nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.025 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5c72ac90-1b41-40ea-aa25-7a5792f7a541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.029 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[da3a023c-9ccc-48a7-872d-286b0f3f23d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.052 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f859df7c-8318-4346-98bc-48ada6092a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.065 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[25146e39-01ec-4f30-ba2b-214a1032dc61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e9e9288-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:4f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 8, 'rx_bytes': 442, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 8, 'rx_bytes': 442, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685406, 'reachable_time': 28796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256313, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.082 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc04ec-c0e6-46f2-b973-91295c7d2578]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685416, 'tstamp': 685416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256314, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685419, 'tstamp': 685419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256314, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.083 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e9e9288-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.123 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.126 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.127 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e9e9288-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.127 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.127 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e9e9288-80, col_values=(('external_ids', {'iface-id': '3e2a376c-d7ad-4503-a53d-422555d8b00e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.128 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.128 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 064279fc-65b8-4bc6-9578-f43479c76dde in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.130 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e9e9288-8ab6-453e-b0a5-e16458d62484#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.145 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7173041-1793-4b00-94bb-dfc0d84e4e7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.156 227364 INFO nova.compute.manager [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Took 33.96 seconds to build instance.#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.176 227364 DEBUG oslo_concurrency.lockutils [None req-692f587e-b144-40a6-9c39-3956a51ca16a 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.178 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5ed467-525a-4137-9ae5-92b3a4ce526d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.181 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[52e1be04-d38f-48b1-aa23-49f4a431a328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.209 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0efd06eb-f107-4577-a4f0-63a91b650d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.233 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[902df561-8213-483a-ac8f-9162e17cc8ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e9e9288-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:4f:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 10, 'rx_bytes': 442, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 10, 'rx_bytes': 442, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685406, 'reachable_time': 28796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256320, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.247 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7a9654-c417-44eb-856e-ecfcf3e957de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685416, 'tstamp': 685416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256321, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap2e9e9288-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685419, 'tstamp': 685419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256321, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.248 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e9e9288-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.249 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.250 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.250 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e9e9288-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.251 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.251 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e9e9288-80, col_values=(('external_ids', {'iface-id': '3e2a376c-d7ad-4503-a53d-422555d8b00e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.251 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.252 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 75fb0180-ce5c-4d77-ab28-71baed47a210 in datapath c657f09f-ecca-4e01-9a07-39931c8c1994 unbound from our chassis#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.254 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c657f09f-ecca-4e01-9a07-39931c8c1994#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.277 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[07afe1b9-9208-4a79-9500-1423859df267]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.306 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b5980b8c-8b70-4de9-80a5-57cfd751bd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.309 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[21e66918-d9af-4714-b285-dd0c734dd9af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.337 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e48327e3-39b3-48d2-b861-fe8b6a928b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.353 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea47513-a4b6-4830-a815-a5e87aab1cb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc657f09f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:40:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685502, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256327, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.375 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6153afcc-a473-4292-afc1-0728269d84e5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tapc657f09f-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685517, 'tstamp': 685517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256328, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc657f09f-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685521, 'tstamp': 685521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256328, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.377 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc657f09f-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.378 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.379 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc657f09f-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.379 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.380 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc657f09f-e0, col_values=(('external_ids', {'iface-id': '05072234-6e33-43d2-b68f-8c1a438d6b05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.380 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.381 139482 INFO neutron.agent.ovn.metadata.agent [-] Port fb488c88-f0f5-4e76-90e7-47161bb2a305 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.383 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.397 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4e16e1-6236-441d-9ddf-fad69ed58403]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.430 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[fb07506c-10ee-4052-a8cd-45bd24c76dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.434 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaace4d-dfcc-41a5-ac28-ae0bb2e95a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.461 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9d55beac-ad3d-43cf-8d50-26fec74b4849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.487 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[81a1fbcf-e14d-47e4-bf78-2e35cf65afc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680698, 'reachable_time': 30284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256334, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.505 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e1172b45-b3c6-41cd-9fa6-8b6363d99109]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680708, 'tstamp': 680708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256335, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680711, 'tstamp': 680711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256335, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.507 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.508 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 nova_compute[227360]: 2025-11-29 08:10:09.509 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.510 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.510 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.511 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.511 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:09.512 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:09.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:09.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.021 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-fb488c88-f0f5-4e76-90e7-47161bb2a305" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.021 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-fb488c88-f0f5-4e76-90e7-47161bb2a305" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.042 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.046 227364 DEBUG nova.objects.instance [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.069 227364 DEBUG nova.virt.libvirt.vif [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.070 227364 DEBUG nova.network.os_vif_util [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.071 227364 DEBUG nova.network.os_vif_util [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.075 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.077 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.079 227364 DEBUG nova.virt.libvirt.driver [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Attempting to detach device tapfb488c88-f0 from instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.079 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:fb:26:83"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <target dev="tapfb488c88-f0"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.087 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.096 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <name>instance-0000004a</name>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <uuid>00bfed58-0a25-44a8-aa0b-2330c27be8ee</uuid>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:06</nova:creationTime>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:port uuid="fb488c88-f0f5-4e76-90e7-47161bb2a305">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='serial'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='uuid'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk' index='2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config' index='1'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:52:6c:f3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='tapda36c2e4-10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:fb:26:83'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='tapfb488c88-f0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='net1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c37,c1018</label>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c37,c1018</imagelabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.101 227364 INFO nova.virt.libvirt.driver [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapfb488c88-f0 from instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee from the persistent domain config.
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.101 227364 DEBUG nova.virt.libvirt.driver [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] (1/8): Attempting to detach device tapfb488c88-f0 with device alias net1 from instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.101 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:fb:26:83"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <target dev="tapfb488c88-f0"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:10:10 np0005539551 kernel: tapfb488c88-f0 (unregistering): left promiscuous mode
Nov 29 03:10:10 np0005539551 NetworkManager[48922]: <info>  [1764403810.1482] device (tapfb488c88-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.159 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764403810.1594954, 00bfed58-0a25-44a8-aa0b-2330c27be8ee => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.161 227364 DEBUG nova.virt.libvirt.driver [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Start waiting for the detach event from libvirt for device tapfb488c88-f0 with device alias net1 for instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.162 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.166 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <name>instance-0000004a</name>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <uuid>00bfed58-0a25-44a8-aa0b-2330c27be8ee</uuid>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:06</nova:creationTime>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:port uuid="fb488c88-f0f5-4e76-90e7-47161bb2a305">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='serial'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='uuid'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk' index='2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config' index='1'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:52:6c:f3'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target dev='tapda36c2e4-10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c37,c1018</label>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c37,c1018</imagelabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.166 227364 INFO nova.virt.libvirt.driver [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapfb488c88-f0 from instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee from the live domain config.#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.166 227364 DEBUG nova.virt.libvirt.vif [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.167 227364 DEBUG nova.network.os_vif_util [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.167 227364 DEBUG nova.network.os_vif_util [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.167 227364 DEBUG os_vif [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.168 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.169 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb488c88-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:10Z|00301|binding|INFO|Releasing lport fb488c88-f0f5-4e76-90e7-47161bb2a305 from this chassis (sb_readonly=0)
Nov 29 03:10:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:10Z|00302|binding|INFO|Setting lport fb488c88-f0f5-4e76-90e7-47161bb2a305 down in Southbound
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.238 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:26:83 10.100.0.13'], port_security=['fa:16:3e:fb:26:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '00bfed58-0a25-44a8-aa0b-2330c27be8ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=fb488c88-f0f5-4e76-90e7-47161bb2a305) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.241 139482 INFO neutron.agent.ovn.metadata.agent [-] Port fb488c88-f0f5-4e76-90e7-47161bb2a305 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.243 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.258 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[93dfaf24-4938-41f0-b8c3-9de1f66051de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.263 227364 INFO os_vif [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0')#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.264 227364 DEBUG nova.virt.libvirt.guest [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:10</nova:creationTime>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:10 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:10 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:10 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:10:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.299 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[232eb066-4985-4002-b129-535ff327571d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.306 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[410e610d-f4d0-42c4-b3e3-511fce01d75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.331 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4eafe03d-f824-4515-8556-c1722b8a90ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.351 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[05a5d9ba-03d5-49d7-9d4e-783ff236ab59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680698, 'reachable_time': 30284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256347, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.371 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d511b367-9c1d-4091-8b86-6acbb116794b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680708, 'tstamp': 680708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256348, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680711, 'tstamp': 680711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256348, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.373 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.375 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.376 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.377 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.377 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.377 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:10.378 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.917 227364 DEBUG nova.compute.manager [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-unplugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.918 227364 DEBUG oslo_concurrency.lockutils [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.919 227364 DEBUG oslo_concurrency.lockutils [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.919 227364 DEBUG oslo_concurrency.lockutils [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.919 227364 DEBUG nova.compute.manager [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-unplugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:10 np0005539551 nova_compute[227360]: 2025-11-29 08:10:10.919 227364 WARNING nova.compute.manager [req-773ef75f-cd9c-4265-aa02-ce2e51b73740 req-1599b440-828f-4574-bb2a-c0624b1ea72d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-unplugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.742 227364 DEBUG nova.compute.manager [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.743 227364 DEBUG oslo_concurrency.lockutils [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.743 227364 DEBUG oslo_concurrency.lockutils [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.743 227364 DEBUG oslo_concurrency.lockutils [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.743 227364 DEBUG nova.compute.manager [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:11 np0005539551 nova_compute[227360]: 2025-11-29 08:10:11.744 227364 WARNING nova.compute.manager [req-f30635c5-e725-48ab-895e-6f96f594dcba req-6f7eaaec-486d-46aa-999f-7fd961fc406a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:11.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.213 227364 DEBUG nova.compute.manager [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-deleted-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.214 227364 INFO nova.compute.manager [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Neutron deleted interface fb488c88-f0f5-4e76-90e7-47161bb2a305; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.214 227364 DEBUG nova.network.neutron [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.241 227364 DEBUG nova.objects.instance [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'system_metadata' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.276 227364 DEBUG nova.objects.instance [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'flavor' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.290 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.291 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.291 227364 DEBUG nova.network.neutron [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.299 227364 DEBUG nova.virt.libvirt.vif [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.300 227364 DEBUG nova.network.os_vif_util [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.301 227364 DEBUG nova.network.os_vif_util [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.306 227364 DEBUG nova.virt.libvirt.guest [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.309 227364 DEBUG nova.virt.libvirt.guest [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <name>instance-0000004a</name>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <uuid>00bfed58-0a25-44a8-aa0b-2330c27be8ee</uuid>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:10</nova:creationTime>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='serial'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='uuid'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk' index='2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config' index='1'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:52:6c:f3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='tapda36c2e4-10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c37,c1018</label>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c37,c1018</imagelabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.322 227364 DEBUG nova.virt.libvirt.guest [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.325 227364 DEBUG nova.virt.libvirt.guest [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fb:26:83"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfb488c88-f0"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <name>instance-0000004a</name>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <uuid>00bfed58-0a25-44a8-aa0b-2330c27be8ee</uuid>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:10</nova:creationTime>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='serial'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='uuid'>00bfed58-0a25-44a8-aa0b-2330c27be8ee</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk' index='2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/00bfed58-0a25-44a8-aa0b-2330c27be8ee_disk.config' index='1'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:52:6c:f3'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target dev='tapda36c2e4-10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee/console.log' append='off'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c37,c1018</label>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c37,c1018</imagelabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.325 227364 WARNING nova.virt.libvirt.driver [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Detaching interface fa:16:3e:fb:26:83 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfb488c88-f0' not found.
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.326 227364 DEBUG nova.virt.libvirt.vif [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.326 227364 DEBUG nova.network.os_vif_util [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "address": "fa:16:3e:fb:26:83", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb488c88-f0", "ovs_interfaceid": "fb488c88-f0f5-4e76-90e7-47161bb2a305", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.326 227364 DEBUG nova.network.os_vif_util [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.327 227364 DEBUG os_vif [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.328 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb488c88-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.328 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.343 227364 INFO os_vif [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:26:83,bridge_name='br-int',has_traffic_filtering=True,id=fb488c88-f0f5-4e76-90e7-47161bb2a305,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb488c88-f0')
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.343 227364 DEBUG nova.virt.libvirt.guest [req-aad2aa9d-6f3e-42e9-bfda-aa55b11ffb09 req-712ce828-010d-40f0-9ded-7dde373d63fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151094337</nova:name>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:10:12</nova:creationTime>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    <nova:port uuid="da36c2e4-1083-48fb-ae61-47bca3a21912">
Nov 29 03:10:12 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:10:12 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:10:12 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.526 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.527 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.527 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.527 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.527 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.528 227364 INFO nova.compute.manager [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Terminating instance#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.529 227364 DEBUG nova.compute.manager [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:12 np0005539551 kernel: tapda36c2e4-10 (unregistering): left promiscuous mode
Nov 29 03:10:12 np0005539551 NetworkManager[48922]: <info>  [1764403812.5863] device (tapda36c2e4-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.637 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:12Z|00303|binding|INFO|Releasing lport da36c2e4-1083-48fb-ae61-47bca3a21912 from this chassis (sb_readonly=0)
Nov 29 03:10:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:12Z|00304|binding|INFO|Setting lport da36c2e4-1083-48fb-ae61-47bca3a21912 down in Southbound
Nov 29 03:10:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:12Z|00305|binding|INFO|Removing iface tapda36c2e4-10 ovn-installed in OVS
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.644 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.655 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.656 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:6c:f3 10.100.0.14'], port_security=['fa:16:3e:52:6c:f3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '00bfed58-0a25-44a8-aa0b-2330c27be8ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5e205a7b-1ce6-4d19-af7d-9b03504545f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=da36c2e4-1083-48fb-ae61-47bca3a21912) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.658 139482 INFO neutron.agent.ovn.metadata.agent [-] Port da36c2e4-1083-48fb-ae61-47bca3a21912 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.664 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.666 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a8970aa9-7e4e-4ebb-8593-79e69a58dc51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.666 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace which is not needed anymore#033[00m
Nov 29 03:10:12 np0005539551 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 29 03:10:12 np0005539551 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004a.scope: Consumed 15.356s CPU time.
Nov 29 03:10:12 np0005539551 systemd-machined[190756]: Machine qemu-34-instance-0000004a terminated.
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.763 227364 INFO nova.virt.libvirt.driver [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Instance destroyed successfully.#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.763 227364 DEBUG nova.objects.instance [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'resources' on Instance uuid 00bfed58-0a25-44a8-aa0b-2330c27be8ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.784 227364 DEBUG nova.virt.libvirt.vif [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151094337',display_name='tempest-AttachInterfacesTestJSON-server-151094337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151094337',id=74,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLz2/hRJ1TIdtPOtDD0/pV/i+rl/aVQTZxMyxxGV6qIdTwffh6F+z4sfbRr8GH8vQoafgFs7aaplxV1s7tLn77mcTUD2rhU9JE1b0RjzFKPDixDmcGcrtGap0RAcgqnY4A==',key_name='tempest-keypair-1899268923',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-t7n1tsl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=00bfed58-0a25-44a8-aa0b-2330c27be8ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.785 227364 DEBUG nova.network.os_vif_util [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.785 227364 DEBUG nova.network.os_vif_util [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.786 227364 DEBUG os_vif [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.787 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.787 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda36c2e4-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.792 227364 INFO os_vif [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=da36c2e4-1083-48fb-ae61-47bca3a21912,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda36c2e4-10')#033[00m
Nov 29 03:10:12 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [NOTICE]   (255516) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:12 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [NOTICE]   (255516) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:12 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [ALERT]    (255516) : Current worker (255518) exited with code 143 (Terminated)
Nov 29 03:10:12 np0005539551 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[255512]: [WARNING]  (255516) : All workers exited. Exiting... (0)
Nov 29 03:10:12 np0005539551 systemd[1]: libpod-588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2.scope: Deactivated successfully.
Nov 29 03:10:12 np0005539551 podman[256377]: 2025-11-29 08:10:12.82853274 +0000 UTC m=+0.057599354 container died 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:10:12 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:12 np0005539551 systemd[1]: var-lib-containers-storage-overlay-c2451a4160f4328be532d5e867e05b044b8d70b9c50c8012e039dcb5bce0cf16-merged.mount: Deactivated successfully.
Nov 29 03:10:12 np0005539551 podman[256377]: 2025-11-29 08:10:12.880524645 +0000 UTC m=+0.109591249 container cleanup 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:10:12 np0005539551 systemd[1]: libpod-conmon-588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2.scope: Deactivated successfully.
Nov 29 03:10:12 np0005539551 podman[256428]: 2025-11-29 08:10:12.950963631 +0000 UTC m=+0.047503366 container remove 588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.956 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdf3878-8206-4847-8bd3-c65d3792ce97]: (4, ('Sat Nov 29 08:10:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2)\n588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2\nSat Nov 29 08:10:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2)\n588bf40ddbcbd252e0464ac49159d2a439d75db17519fb28ee3bc459088a33f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.958 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a8eb7d-0cea-4264-8f99-63eb2f197f24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.960 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 kernel: tapddd8b166-70: left promiscuous mode
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.976 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 nova_compute[227360]: 2025-11-29 08:10:12.979 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:12.979 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7b7c7f-0415-4a3d-8c8b-d66b9e5abf93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:13.004 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[be5a58b9-fb3c-4d0f-9a69-c4be73463601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:13.005 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4d8999-fbdd-42b3-9161-aeb3ed6d0596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.007 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:13.023 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdb1c03-6f1c-47ec-861d-0e2d82e56320]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680691, 'reachable_time': 20590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256443, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:13.025 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:13.025 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[07386542-7bda-4b9c-b4fe-5dc72b3975ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:13 np0005539551 systemd[1]: run-netns-ovnmeta\x2dddd8b166\x2d79ec\x2d408d\x2db52c\x2d581ad9dd6cb8.mount: Deactivated successfully.
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.068 227364 DEBUG nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.068 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.068 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.068 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 WARNING nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-plugged-fb488c88-f0f5-4e76-90e7-47161bb2a305 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-unplugged-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG oslo_concurrency.lockutils [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.069 227364 DEBUG nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-unplugged-da36c2e4-1083-48fb-ae61-47bca3a21912 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.070 227364 DEBUG nova.compute.manager [req-4766c2a0-27ab-44f9-85b2-4bec6fa84884 req-cc8a6e07-abdc-42d9-b035-12639e094938 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-unplugged-da36c2e4-1083-48fb-ae61-47bca3a21912 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.209 227364 INFO nova.virt.libvirt.driver [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Deleting instance files /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee_del#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.210 227364 INFO nova.virt.libvirt.driver [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Deletion of /var/lib/nova/instances/00bfed58-0a25-44a8-aa0b-2330c27be8ee_del complete#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.271 227364 INFO nova.compute.manager [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.271 227364 DEBUG oslo.service.loopingcall [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.272 227364 DEBUG nova.compute.manager [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.272 227364 DEBUG nova.network.neutron [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:13.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:13.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.825 227364 DEBUG nova.compute.manager [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-changed-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.826 227364 DEBUG nova.compute.manager [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing instance network info cache due to event network-changed-5edc2ee1-b429-4cb7-8b14-3915aa40d39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.826 227364 DEBUG oslo_concurrency.lockutils [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.826 227364 DEBUG oslo_concurrency.lockutils [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:13 np0005539551 nova_compute[227360]: 2025-11-29 08:10:13.826 227364 DEBUG nova.network.neutron [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Refreshing network info cache for port 5edc2ee1-b429-4cb7-8b14-3915aa40d39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.129 227364 DEBUG nova.network.neutron [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.162 227364 INFO nova.compute.manager [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.222 227364 DEBUG nova.compute.manager [req-efe53449-c1c8-40f2-b2cc-359b7a2d5877 req-4f88c72a-6273-4aaa-a5e4-7e0ea0d21b61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-deleted-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.399 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.399 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.455 227364 DEBUG oslo_concurrency.processutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.666 227364 DEBUG nova.network.neutron [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Updating instance_info_cache with network_info: [{"id": "da36c2e4-1083-48fb-ae61-47bca3a21912", "address": "fa:16:3e:52:6c:f3", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda36c2e4-10", "ovs_interfaceid": "da36c2e4-1083-48fb-ae61-47bca3a21912", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.698 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-00bfed58-0a25-44a8-aa0b-2330c27be8ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.726 227364 DEBUG oslo_concurrency.lockutils [None req-2b785758-bc39-4d90-b6b3-ccdd432157ec b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-00bfed58-0a25-44a8-aa0b-2330c27be8ee-fb488c88-f0f5-4e76-90e7-47161bb2a305" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:14 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/510353875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.902 227364 DEBUG oslo_concurrency.processutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.908 227364 DEBUG nova.compute.provider_tree [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.935 227364 DEBUG nova.scheduler.client.report [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:14 np0005539551 nova_compute[227360]: 2025-11-29 08:10:14.977 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.005 227364 INFO nova.scheduler.client.report [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Deleted allocations for instance 00bfed58-0a25-44a8-aa0b-2330c27be8ee#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.043 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.075 227364 DEBUG oslo_concurrency.lockutils [None req-add783d6-58a8-4258-a5a5-bcae1d172323 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.195 227364 DEBUG nova.compute.manager [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.196 227364 DEBUG oslo_concurrency.lockutils [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.196 227364 DEBUG oslo_concurrency.lockutils [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.196 227364 DEBUG oslo_concurrency.lockutils [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00bfed58-0a25-44a8-aa0b-2330c27be8ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.197 227364 DEBUG nova.compute.manager [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] No waiting events found dispatching network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.197 227364 WARNING nova.compute.manager [req-77b0a7a0-676a-4c19-9b9a-91297cb540af req-5bb2a171-0352-4e06-8ecf-f4190aa6e1a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Received unexpected event network-vif-plugged-da36c2e4-1083-48fb-ae61-47bca3a21912 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:10:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:15.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:15.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.862 227364 DEBUG nova.network.neutron [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updated VIF entry in instance network info cache for port 5edc2ee1-b429-4cb7-8b14-3915aa40d39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.862 227364 DEBUG nova.network.neutron [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:15 np0005539551 nova_compute[227360]: 2025-11-29 08:10:15.883 227364 DEBUG oslo_concurrency.lockutils [req-ab2c046c-c2c9-4bf9-ae7a-40028b1ab709 req-9cb2e661-c864-4be8-b1f7-cddd6fcb0cc6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:17.514 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:17.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:17 np0005539551 nova_compute[227360]: 2025-11-29 08:10:17.792 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:19.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:19.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:19.860 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:19.860 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:19.862 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.636 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.636 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.636 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:10:20 np0005539551 nova_compute[227360]: 2025-11-29 08:10:20.637 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86dc24b2-55cd-4720-825d-14b5a233fc8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:21.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:22 np0005539551 nova_compute[227360]: 2025-11-29 08:10:22.810 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:23Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:c3:f8 10.2.2.100
Nov 29 03:10:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:23Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:c3:f8 10.2.2.100
Nov 29 03:10:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:23Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:8b:79 10.2.2.200
Nov 29 03:10:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:23Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:8b:79 10.2.2.200
Nov 29 03:10:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:dd:ef 10.1.1.101
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:dd:ef 10.1.1.101
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:4c:8b 10.1.1.67
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:4c:8b 10.1.1.67
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:c5:06 10.100.0.11
Nov 29 03:10:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:24Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:c5:06 10.100.0.11
Nov 29 03:10:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:25Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:4f:0c 10.1.1.188
Nov 29 03:10:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:25Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:4f:0c 10.1.1.188
Nov 29 03:10:25 np0005539551 nova_compute[227360]: 2025-11-29 08:10:25.048 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:25Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:43:84 10.1.1.23
Nov 29 03:10:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:25Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:43:84 10.1.1.23
Nov 29 03:10:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:25.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:27 np0005539551 nova_compute[227360]: 2025-11-29 08:10:27.765 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403812.7615108, 00bfed58-0a25-44a8-aa0b-2330c27be8ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:27 np0005539551 nova_compute[227360]: 2025-11-29 08:10:27.765 227364 INFO nova.compute.manager [-] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:10:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:27.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:27.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:27 np0005539551 nova_compute[227360]: 2025-11-29 08:10:27.852 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539551 nova_compute[227360]: 2025-11-29 08:10:27.873 227364 DEBUG nova.compute.manager [None req-2c9dbef1-8e8c-44ab-9fa3-0f6b0b3ef06e - - - - - -] [instance: 00bfed58-0a25-44a8-aa0b-2330c27be8ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:29.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:29.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:30 np0005539551 nova_compute[227360]: 2025-11-29 08:10:30.053 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:30 np0005539551 nova_compute[227360]: 2025-11-29 08:10:30.995 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.018 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-86dc24b2-55cd-4720-825d-14b5a233fc8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.018 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.019 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.019 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.019 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.019 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.020 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.046 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.047 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.047 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.047 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.047 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1279067861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.490 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.582 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.582 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.582 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.583 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:31.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.813 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.814 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4381MB free_disk=20.907878875732422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.814 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.815 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.954 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 86dc24b2-55cd-4720-825d-14b5a233fc8f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.954 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:10:31 np0005539551 nova_compute[227360]: 2025-11-29 08:10:31.955 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.112 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.140 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.141 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.156 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.178 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.216 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2255911119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.653 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.659 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.682 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.717 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.717 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:32 np0005539551 nova_compute[227360]: 2025-11-29 08:10:32.856 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:33 np0005539551 nova_compute[227360]: 2025-11-29 08:10:33.108 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:33 np0005539551 nova_compute[227360]: 2025-11-29 08:10:33.109 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:33.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:34 np0005539551 nova_compute[227360]: 2025-11-29 08:10:34.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:34 np0005539551 nova_compute[227360]: 2025-11-29 08:10:34.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:10:35 np0005539551 nova_compute[227360]: 2025-11-29 08:10:35.057 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 03:10:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:35.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:35.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/183765169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:37 np0005539551 nova_compute[227360]: 2025-11-29 08:10:37.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539551 podman[256694]: 2025-11-29 08:10:38.619166908 +0000 UTC m=+0.061358175 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:10:38 np0005539551 podman[256697]: 2025-11-29 08:10:38.628324662 +0000 UTC m=+0.058219412 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:38 np0005539551 podman[256693]: 2025-11-29 08:10:38.644374369 +0000 UTC m=+0.094876847 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 03:10:39 np0005539551 nova_compute[227360]: 2025-11-29 08:10:39.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:39.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:40 np0005539551 nova_compute[227360]: 2025-11-29 08:10:40.059 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:41.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:41.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:42 np0005539551 nova_compute[227360]: 2025-11-29 08:10:42.936 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:43.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 03:10:45 np0005539551 nova_compute[227360]: 2025-11-29 08:10:45.061 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:45.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:46.486 139598 DEBUG eventlet.wsgi.server [-] (139598) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:46.487 139598 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: Accept: */*#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: Connection: close#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: Content-Type: text/plain#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: Host: 169.254.169.254#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: User-Agent: curl/7.84.0#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: X-Forwarded-For: 10.100.0.11#015
Nov 29 03:10:46 np0005539551 ovn_metadata_agent[139465]: X-Ovn-Network-Id: 997fb1c7-7a6a-4755-bd31-f24f7590c80c __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:10:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:47.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:47 np0005539551 haproxy-metadata-proxy-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256083]: 10.100.0.11:49128 [29/Nov/2025:08:10:46.485] listener listener/metadata 0/0/0/1337/1337 200 2536 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 03:10:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:47.821 139598 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:10:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:47.822 139598 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2552 time: 1.3345544#033[00m
Nov 29 03:10:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:47 np0005539551 nova_compute[227360]: 2025-11-29 08:10:47.941 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.221 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.221 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.222 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.222 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.222 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.224 227364 INFO nova.compute.manager [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Terminating instance#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.226 227364 DEBUG nova.compute.manager [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap5edc2ee1-b4 (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.3775] device (tap5edc2ee1-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00306|binding|INFO|Releasing lport 5edc2ee1-b429-4cb7-8b14-3915aa40d39c from this chassis (sb_readonly=0)
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00307|binding|INFO|Setting lport 5edc2ee1-b429-4cb7-8b14-3915aa40d39c down in Southbound
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.388 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00308|binding|INFO|Removing iface tap5edc2ee1-b4 ovn-installed in OVS
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.390 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap08b93c9b-b7 (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.4060] device (tap08b93c9b-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.411 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:c5:06 10.100.0.11'], port_security=['fa:16:3e:73:c5:06 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55cabc28-801b-476b-85c5-aa3b3b098458, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5edc2ee1-b429-4cb7-8b14-3915aa40d39c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.412 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5edc2ee1-b429-4cb7-8b14-3915aa40d39c in datapath 997fb1c7-7a6a-4755-bd31-f24f7590c80c unbound from our chassis#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.414 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997fb1c7-7a6a-4755-bd31-f24f7590c80c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00309|binding|INFO|Releasing lport 08b93c9b-b707-46be-87f1-913a85c9fdbb from this chassis (sb_readonly=0)
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00310|binding|INFO|Setting lport 08b93c9b-b707-46be-87f1-913a85c9fdbb down in Southbound
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00311|binding|INFO|Removing iface tap08b93c9b-b7 ovn-installed in OVS
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.416 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b6df1dbf-a8fa-4d92-b586-e4393d8433f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.416 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c namespace which is not needed anymore#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.417 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap0b94f521-1b (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.432 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:ef 10.1.1.101'], port_security=['fa:16:3e:7d:dd:ef 10.1.1.101'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-207477711', 'neutron:cidrs': '10.1.1.101/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-207477711', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7f8a9bb-1199-4c44-851e-aceb219e03cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=08b93c9b-b707-46be-87f1-913a85c9fdbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.4340] device (tap0b94f521-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.447 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00312|binding|INFO|Releasing lport 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea from this chassis (sb_readonly=0)
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00313|binding|INFO|Setting lport 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea down in Southbound
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00314|binding|INFO|Removing iface tap0b94f521-1b ovn-installed in OVS
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.448 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap064279fc-65 (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.4667] device (tap064279fc-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.468 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap580e3c4b-92 (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.4856] device (tap580e3c4b-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00315|binding|INFO|Releasing lport 064279fc-65b8-4bc6-9578-f43479c76dde from this chassis (sb_readonly=1)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.487 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00316|binding|INFO|Removing iface tap064279fc-65 ovn-installed in OVS
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00317|if_status|INFO|Dropped 2 log messages in last 255 seconds (most recently, 255 seconds ago) due to excessive rate
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00318|if_status|INFO|Not setting lport 064279fc-65b8-4bc6-9578-f43479c76dde down as sb is readonly
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.490 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap6781fd06-33 (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.5138] device (tap6781fd06-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.513 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00319|binding|INFO|Releasing lport 580e3c4b-9212-4f39-be82-c8e878e729e2 from this chassis (sb_readonly=1)
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00320|binding|INFO|Removing iface tap580e3c4b-92 ovn-installed in OVS
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.518 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap75fb0180-ce (unregistering): left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.5485] device (tap75fb0180-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00321|binding|INFO|Releasing lport 6781fd06-3321-4100-b5d3-9e92a565007b from this chassis (sb_readonly=1)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.550 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00322|binding|INFO|Removing iface tap6781fd06-33 ovn-installed in OVS
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.552 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [NOTICE]   (256081) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [NOTICE]   (256081) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [WARNING]  (256081) : Exiting Master process...
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [WARNING]  (256081) : Exiting Master process...
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [ALERT]    (256081) : Current worker (256083) exited with code 143 (Terminated)
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c[256077]: [WARNING]  (256081) : All workers exited. Exiting... (0)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 systemd[1]: libpod-901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856.scope: Deactivated successfully.
Nov 29 03:10:49 np0005539551 conmon[256077]: conmon 901c5d3f6e4bed9c7438 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856.scope/container/memory.events
Nov 29 03:10:49 np0005539551 podman[256799]: 2025-11-29 08:10:49.564463583 +0000 UTC m=+0.055841038 container died 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.591 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00323|binding|INFO|Releasing lport 75fb0180-ce5c-4d77-ab28-71baed47a210 from this chassis (sb_readonly=1)
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00324|binding|INFO|Removing iface tap75fb0180-ce ovn-installed in OVS
Nov 29 03:10:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay-a9824864aa90d38506ad00615b357c329398c35795238d3e9fb2926615366d9f-merged.mount: Deactivated successfully.
Nov 29 03:10:49 np0005539551 podman[256799]: 2025-11-29 08:10:49.601639753 +0000 UTC m=+0.093017198 container cleanup 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:10:49 np0005539551 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 29 03:10:49 np0005539551 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Consumed 17.582s CPU time.
Nov 29 03:10:49 np0005539551 systemd-machined[190756]: Machine qemu-35-instance-0000004d terminated.
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00325|binding|INFO|Setting lport 75fb0180-ce5c-4d77-ab28-71baed47a210 down in Southbound
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00326|binding|INFO|Setting lport 580e3c4b-9212-4f39-be82-c8e878e729e2 down in Southbound
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00327|binding|INFO|Setting lport 6781fd06-3321-4100-b5d3-9e92a565007b down in Southbound
Nov 29 03:10:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:10:49Z|00328|binding|INFO|Setting lport 064279fc-65b8-4bc6-9578-f43479c76dde down in Southbound
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.635 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:4f:0c 10.1.1.188'], port_security=['fa:16:3e:a7:4f:0c 10.1.1.188'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1346256411', 'neutron:cidrs': '10.1.1.188/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1346256411', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7f8a9bb-1199-4c44-851e-aceb219e03cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 systemd[1]: libpod-conmon-901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856.scope: Deactivated successfully.
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.6455] manager: (tap5edc2ee1-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.649 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:c3:f8 10.2.2.100'], port_security=['fa:16:3e:a4:c3:f8 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c657f09f-ecca-4e01-9a07-39931c8c1994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93bd27dc-d7ec-4f57-a690-c5f336775299, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6781fd06-3321-4100-b5d3-9e92a565007b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.651 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:8b:79 10.2.2.200'], port_security=['fa:16:3e:07:8b:79 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c657f09f-ecca-4e01-9a07-39931c8c1994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93bd27dc-d7ec-4f57-a690-c5f336775299, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=75fb0180-ce5c-4d77-ab28-71baed47a210) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.653 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:43:84 10.1.1.23'], port_security=['fa:16:3e:e1:43:84 10.1.1.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.23/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=580e3c4b-9212-4f39-be82-c8e878e729e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.654 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:4c:8b 10.1.1.67'], port_security=['fa:16:3e:8f:4c:8b 10.1.1.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.67/24', 'neutron:device_id': '86dc24b2-55cd-4720-825d-14b5a233fc8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28c3d09b9e21417cb7bc44b8552f1b81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '588937ad-2a59-4cd1-8b4f-9964276d1cf1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdddab22-bfa1-4179-9934-bf59b607238d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=064279fc-65b8-4bc6-9578-f43479c76dde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.6567] manager: (tap08b93c9b-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 03:10:49 np0005539551 podman[256851]: 2025-11-29 08:10:49.671796781 +0000 UTC m=+0.048313517 container remove 901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.678 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[94b1cfae-139e-4c0b-986d-303f6e1b43d0]: (4, ('Sat Nov 29 08:10:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c (901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856)\n901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856\nSat Nov 29 08:10:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c (901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856)\n901c5d3f6e4bed9c7438a06d9fe78473315e415fa9bdce00c662d7844018f856\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.680 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ae581124-10b3-44cc-a38c-d2b3ed40ba45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.681 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997fb1c7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.682 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.6863] manager: (tap580e3c4b-92): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.703 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 kernel: tap997fb1c7-70: left promiscuous mode
Nov 29 03:10:49 np0005539551 NetworkManager[48922]: <info>  [1764403849.7116] manager: (tap75fb0180-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.725 227364 INFO nova.virt.libvirt.driver [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Instance destroyed successfully.#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.725 227364 DEBUG nova.objects.instance [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lazy-loading 'resources' on Instance uuid 86dc24b2-55cd-4720-825d-14b5a233fc8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.725 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9262e1f7-7b73-4dc8-8f78-0e764c569c99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.737 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6a832033-416f-4499-b46f-f5bd66867a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.739 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9cfd7b-3aa8-45b0-95a2-3d19901aea3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.745 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.745 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.746 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.746 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.748 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.748 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5edc2ee1-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.750 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.752 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.755 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ba52ba-77ba-4c0f-a268-1adc48fac24a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685310, 'reachable_time': 38235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256950, 'error': None, 'target': 'ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 systemd[1]: run-netns-ovnmeta\x2d997fb1c7\x2d7a6a\x2d4755\x2dbd31\x2df24f7590c80c.mount: Deactivated successfully.
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.757 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997fb1c7-7a6a-4755-bd31-f24f7590c80c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.757 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[623eecde-e472-47bb-8a54-3cc1768180ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.758 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 08b93c9b-b707-46be-87f1-913a85c9fdbb in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.760 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e9e9288-8ab6-453e-b0a5-e16458d62484, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.760 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1656e815-78d3-4686-9a79-18be37ab5257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.761 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484 namespace which is not needed anymore#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.782 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:c5:06,bridge_name='br-int',has_traffic_filtering=True,id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c,network=Network(997fb1c7-7a6a-4755-bd31-f24f7590c80c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5edc2ee1-b4')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.783 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.783 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.784 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.784 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.785 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08b93c9b-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.786 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.788 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.799 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:dd:ef,bridge_name='br-int',has_traffic_filtering=True,id=08b93c9b-b707-46be-87f1-913a85c9fdbb,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap08b93c9b-b7')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.800 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.800 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.801 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.801 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.802 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.803 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b94f521-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:49.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.813 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.814 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:4f:0c,bridge_name='br-int',has_traffic_filtering=True,id=0b94f521-1bc4-4c6b-9d91-f98c1b1371ea,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0b94f521-1b')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.815 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.815 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.816 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.816 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.817 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.817 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap064279fc-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.818 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.821 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.827 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4c:8b,bridge_name='br-int',has_traffic_filtering=True,id=064279fc-65b8-4bc6-9578-f43479c76dde,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap064279fc-65')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.827 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.828 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.828 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.828 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.829 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.830 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580e3c4b-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.834 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.838 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:43:84,bridge_name='br-int',has_traffic_filtering=True,id=580e3c4b-9212-4f39-be82-c8e878e729e2,network=Network(2e9e9288-8ab6-453e-b0a5-e16458d62484),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580e3c4b-92')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.839 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.839 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "6781fd06-3321-4100-b5d3-9e92a565007b", "address": "fa:16:3e:a4:c3:f8", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6781fd06-33", "ovs_interfaceid": "6781fd06-3321-4100-b5d3-9e92a565007b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.839 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.840 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.841 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.841 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6781fd06-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.842 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.846 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:c3:f8,bridge_name='br-int',has_traffic_filtering=True,id=6781fd06-3321-4100-b5d3-9e92a565007b,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6781fd06-33')#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.847 227364 DEBUG nova.virt.libvirt.vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1339930826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1339930826',id=77,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX72qjDEB4IQ4WdPqfaDcQoLGbp1kQWDQ1fLv1OvCXP+E3nsXWXjdsJxPgY/fixvzpYHFbZD11YMCPy/p3b1ThogMSbMQVu2e2El83y3t350K3WzjnAMvFjZjqU9kkZuw==',key_name='tempest-keypair-1749804170',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28c3d09b9e21417cb7bc44b8552f1b81',ramdisk_id='',reservation_id='r-60dw71oa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1392053176',owner_user_name='tempest-TaggedBootDevicesTest_v242-1392053176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ef481e9e8e0440c91abe11aee229780',uuid=86dc24b2-55cd-4720-825d-14b5a233fc8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.848 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converting VIF {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.848 227364 DEBUG nova.network.os_vif_util [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.849 227364 DEBUG os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.850 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75fb0180-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.852 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.854 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:49 np0005539551 nova_compute[227360]: 2025-11-29 08:10:49.855 227364 INFO os_vif [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:8b:79,bridge_name='br-int',has_traffic_filtering=True,id=75fb0180-ce5c-4d77-ab28-71baed47a210,network=Network(c657f09f-ecca-4e01-9a07-39931c8c1994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75fb0180-ce')#033[00m
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [NOTICE]   (256192) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [NOTICE]   (256192) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [WARNING]  (256192) : Exiting Master process...
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [ALERT]    (256192) : Current worker (256207) exited with code 143 (Terminated)
Nov 29 03:10:49 np0005539551 neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484[256154]: [WARNING]  (256192) : All workers exited. Exiting... (0)
Nov 29 03:10:49 np0005539551 systemd[1]: libpod-c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30.scope: Deactivated successfully.
Nov 29 03:10:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:49.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:49 np0005539551 podman[256993]: 2025-11-29 08:10:49.879068411 +0000 UTC m=+0.039043301 container died c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay-e53b8c2d3342ab6a497ac134c78a63be73972ba575c014bc1292ee79a5420a7f-merged.mount: Deactivated successfully.
Nov 29 03:10:49 np0005539551 podman[256993]: 2025-11-29 08:10:49.917628648 +0000 UTC m=+0.077603538 container cleanup c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:10:49 np0005539551 systemd[1]: libpod-conmon-c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30.scope: Deactivated successfully.
Nov 29 03:10:49 np0005539551 podman[257039]: 2025-11-29 08:10:49.991367562 +0000 UTC m=+0.055545071 container remove c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.997 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bacf83-ec82-4ac2-80cb-553de076a32f]: (4, ('Sat Nov 29 08:10:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484 (c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30)\nc0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30\nSat Nov 29 08:10:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484 (c0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30)\nc0cb385ed47ff702646a21290494bfa39db832e72d14e22b145b252485092f30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:49.999 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[55874485-eff8-4614-a467-7fd8125043fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.000 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e9e9288-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 kernel: tap2e9e9288-80: left promiscuous mode
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.003 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.005 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[37b137f9-b317-47d2-bfe5-6e5e59b2a5d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.015 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.027 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c64f1c5c-b9ec-4f30-818e-308fb120718c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.029 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4db284-41e7-4f73-bb17-addd56c522d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.047 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[47796c75-bb6d-4f66-9c3e-c0544d32e994]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685399, 'reachable_time': 28529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257053, 'error': None, 'target': 'ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.050 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e9e9288-8ab6-453e-b0a5-e16458d62484 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.050 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9184cdbf-8406-4e53-8e44-6685e6cb8898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.051 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0b94f521-1bc4-4c6b-9d91-f98c1b1371ea in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.052 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e9e9288-8ab6-453e-b0a5-e16458d62484, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b0614a7a-ff1f-48fe-8573-7f9155e5b178]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.054 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6781fd06-3321-4100-b5d3-9e92a565007b in datapath c657f09f-ecca-4e01-9a07-39931c8c1994 unbound from our chassis#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.055 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c657f09f-ecca-4e01-9a07-39931c8c1994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.056 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[908d5d89-fdd4-4b73-9f61-57e5b8372946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.056 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994 namespace which is not needed anymore#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.058 227364 DEBUG nova.compute.manager [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.059 227364 DEBUG oslo_concurrency.lockutils [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.061 227364 DEBUG oslo_concurrency.lockutils [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.061 227364 DEBUG oslo_concurrency.lockutils [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.061 227364 DEBUG nova.compute.manager [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.061 227364 DEBUG nova.compute.manager [req-be622932-2553-46bf-891b-baef41dc93c1 req-a7c8b78e-c1d8-4f30-b981-0faf01bac4a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.065 227364 DEBUG nova.compute.manager [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-08b93c9b-b707-46be-87f1-913a85c9fdbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.065 227364 DEBUG oslo_concurrency.lockutils [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.066 227364 DEBUG oslo_concurrency.lockutils [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.066 227364 DEBUG oslo_concurrency.lockutils [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.066 227364 DEBUG nova.compute.manager [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-08b93c9b-b707-46be-87f1-913a85c9fdbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.066 227364 DEBUG nova.compute.manager [req-75eaf31b-1b42-4fee-a816-9a3804f22a5d req-aaf81bec-2fe3-41c9-a445-188d621b79a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-08b93c9b-b707-46be-87f1-913a85c9fdbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.067 227364 DEBUG nova.compute.manager [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.067 227364 DEBUG oslo_concurrency.lockutils [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.068 227364 DEBUG oslo_concurrency.lockutils [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.068 227364 DEBUG oslo_concurrency.lockutils [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.068 227364 DEBUG nova.compute.manager [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-6781fd06-3321-4100-b5d3-9e92a565007b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.068 227364 DEBUG nova.compute.manager [req-6aaf48a1-b0a5-4b11-a19e-97f96760cc9e req-38661a93-10b8-48c4-9272-6354b3040c99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-6781fd06-3321-4100-b5d3-9e92a565007b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.069 227364 DEBUG nova.compute.manager [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.070 227364 DEBUG oslo_concurrency.lockutils [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.070 227364 DEBUG oslo_concurrency.lockutils [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.070 227364 DEBUG oslo_concurrency.lockutils [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.070 227364 DEBUG nova.compute.manager [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-580e3c4b-9212-4f39-be82-c8e878e729e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.070 227364 DEBUG nova.compute.manager [req-c6a584be-4270-413d-b99c-9d6e83566fe3 req-49743a11-0aaa-4c0c-88d7-b26f04567025 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-580e3c4b-9212-4f39-be82-c8e878e729e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.071 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [NOTICE]   (256297) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:50 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [NOTICE]   (256297) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:50 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [WARNING]  (256297) : Exiting Master process...
Nov 29 03:10:50 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [ALERT]    (256297) : Current worker (256299) exited with code 143 (Terminated)
Nov 29 03:10:50 np0005539551 neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994[256293]: [WARNING]  (256297) : All workers exited. Exiting... (0)
Nov 29 03:10:50 np0005539551 systemd[1]: libpod-ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c.scope: Deactivated successfully.
Nov 29 03:10:50 np0005539551 podman[257071]: 2025-11-29 08:10:50.214309259 +0000 UTC m=+0.045823012 container died ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:10:50 np0005539551 podman[257071]: 2025-11-29 08:10:50.247563134 +0000 UTC m=+0.079076887 container cleanup ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:50 np0005539551 systemd[1]: libpod-conmon-ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c.scope: Deactivated successfully.
Nov 29 03:10:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:50 np0005539551 podman[257102]: 2025-11-29 08:10:50.30637742 +0000 UTC m=+0.038336181 container remove ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.311 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b7441b35-2280-4e2b-9c19-2b2bb93b616a]: (4, ('Sat Nov 29 08:10:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994 (ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c)\nef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c\nSat Nov 29 08:10:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994 (ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c)\nef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.312 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[78fce79e-b5b2-44d6-99c3-b50f29ab7c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.313 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc657f09f-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 kernel: tapc657f09f-e0: left promiscuous mode
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.326 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.330 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a636d026-5e71-41e3-9674-6c7628176948]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.354 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[62156f3b-fa2f-4d82-b76e-55ea03452090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.355 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d78a1e32-d1a8-4746-a5ac-8d29e9109535]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.368 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8afbfb37-179c-40fb-ac30-c1843b22e486]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685496, 'reachable_time': 21660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257117, 'error': None, 'target': 'ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.370 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c657f09f-ecca-4e01-9a07-39931c8c1994 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.370 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[808ce997-3234-4fa3-b1ee-ae381ca946d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.371 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 75fb0180-ce5c-4d77-ab28-71baed47a210 in datapath c657f09f-ecca-4e01-9a07-39931c8c1994 unbound from our chassis#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.372 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c657f09f-ecca-4e01-9a07-39931c8c1994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.373 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bf665a9d-9483-4826-9ee3-67de19a76ca8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.373 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 580e3c4b-9212-4f39-be82-c8e878e729e2 in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.374 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e9e9288-8ab6-453e-b0a5-e16458d62484, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.375 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[900260be-013e-4308-8f55-9df719453f06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.375 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 064279fc-65b8-4bc6-9578-f43479c76dde in datapath 2e9e9288-8ab6-453e-b0a5-e16458d62484 unbound from our chassis#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.377 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e9e9288-8ab6-453e-b0a5-e16458d62484, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.377 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[be9bd642-ae39-43f3-a26b-2182775f721e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:50 np0005539551 nova_compute[227360]: 2025-11-29 08:10:50.585 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.585 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:50.586 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:50 np0005539551 systemd[1]: var-lib-containers-storage-overlay-bb049d783d48e827362d5b273a84a1e8ff56e9f141e96df35b9c6f57ad708bef-merged.mount: Deactivated successfully.
Nov 29 03:10:50 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef14e25514270b5a492a891e08095082336666af73197d80992be346a6e4430c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:50 np0005539551 systemd[1]: run-netns-ovnmeta\x2dc657f09f\x2decca\x2d4e01\x2d9a07\x2d39931c8c1994.mount: Deactivated successfully.
Nov 29 03:10:50 np0005539551 systemd[1]: run-netns-ovnmeta\x2d2e9e9288\x2d8ab6\x2d453e\x2db0a5\x2de16458d62484.mount: Deactivated successfully.
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.240 227364 INFO nova.virt.libvirt.driver [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Deleting instance files /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f_del#033[00m
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.241 227364 INFO nova.virt.libvirt.driver [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Deletion of /var/lib/nova/instances/86dc24b2-55cd-4720-825d-14b5a233fc8f_del complete#033[00m
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.305 227364 INFO nova.compute.manager [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Took 2.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.306 227364 DEBUG oslo.service.loopingcall [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.306 227364 DEBUG nova.compute.manager [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:51 np0005539551 nova_compute[227360]: 2025-11-29 08:10:51.306 227364 DEBUG nova.network.neutron [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:51.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:51.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.148 227364 DEBUG nova.compute.manager [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.148 227364 DEBUG oslo_concurrency.lockutils [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.149 227364 DEBUG oslo_concurrency.lockutils [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.149 227364 DEBUG oslo_concurrency.lockutils [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.149 227364 DEBUG nova.compute.manager [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.150 227364 WARNING nova.compute.manager [req-df549b5a-a168-4188-9ec3-af466bf60d58 req-46fa009f-933c-4635-a571-dfe79fcc100d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-6781fd06-3321-4100-b5d3-9e92a565007b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.155 227364 DEBUG nova.compute.manager [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.155 227364 DEBUG oslo_concurrency.lockutils [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.156 227364 DEBUG oslo_concurrency.lockutils [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.156 227364 DEBUG oslo_concurrency.lockutils [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.156 227364 DEBUG nova.compute.manager [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.156 227364 WARNING nova.compute.manager [req-9b6c6aeb-f3d1-4260-9096-60470bc2c1a9 req-4e7195bd-f11d-421f-940f-93dc1562e2fa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-5edc2ee1-b429-4cb7-8b14-3915aa40d39c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 03:10:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:10:52.588 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.810 227364 DEBUG nova.compute.manager [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.810 227364 DEBUG oslo_concurrency.lockutils [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.811 227364 DEBUG oslo_concurrency.lockutils [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.811 227364 DEBUG oslo_concurrency.lockutils [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.811 227364 DEBUG nova.compute.manager [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:52 np0005539551 nova_compute[227360]: 2025-11-29 08:10:52.811 227364 WARNING nova.compute.manager [req-293ad8dc-20e5-4108-8483-b02660dc3031 req-b87b71a5-9bca-453f-a031-9c77b6902cbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-08b93c9b-b707-46be-87f1-913a85c9fdbb for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.006 227364 DEBUG nova.compute.manager [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.006 227364 DEBUG oslo_concurrency.lockutils [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.007 227364 DEBUG oslo_concurrency.lockutils [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.007 227364 DEBUG oslo_concurrency.lockutils [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.007 227364 DEBUG nova.compute.manager [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:53 np0005539551 nova_compute[227360]: 2025-11-29 08:10:53.007 227364 WARNING nova.compute.manager [req-baccd0b7-b372-4cbb-acad-e9c7dcc74288 req-f9603935-88ff-46fc-a263-0520c3aa2db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-580e3c4b-9212-4f39-be82-c8e878e729e2 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:53.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:54 np0005539551 nova_compute[227360]: 2025-11-29 08:10:54.854 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.066 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.184 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.184 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.184 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.184 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.185 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 WARNING nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-0b94f521-1bc4-4c6b-9d91-f98c1b1371ea for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.186 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-75fb0180-ce5c-4d77-ab28-71baed47a210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-75fb0180-ce5c-4d77-ab28-71baed47a210 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG oslo_concurrency.lockutils [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 DEBUG nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.187 227364 WARNING nova.compute.manager [req-2d926740-dc4a-4350-8297-01f4d6d319d3 req-e8caede8-15b3-42b3-862a-de2ea52da94c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-75fb0180-ce5c-4d77-ab28-71baed47a210 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.337 227364 DEBUG nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.337 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.337 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.337 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.338 227364 DEBUG nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-unplugged-064279fc-65b8-4bc6-9578-f43479c76dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.338 227364 DEBUG nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-unplugged-064279fc-65b8-4bc6-9578-f43479c76dde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.338 227364 DEBUG nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.338 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.338 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.339 227364 DEBUG oslo_concurrency.lockutils [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.339 227364 DEBUG nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] No waiting events found dispatching network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:55 np0005539551 nova_compute[227360]: 2025-11-29 08:10:55.339 227364 WARNING nova.compute.manager [req-79a91d68-f77f-438b-8a06-acfd1b7839a4 req-c1bcb10b-3ad2-4bbd-aa0f-e17c4421ac17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received unexpected event network-vif-plugged-064279fc-65b8-4bc6-9578-f43479c76dde for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:55.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:55.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:57 np0005539551 nova_compute[227360]: 2025-11-29 08:10:57.013 227364 DEBUG nova.compute.manager [req-8f9e8ae3-c677-4bdb-bc9e-11f190f2773d req-f3585331-45a2-461c-9380-5e763feabfb7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-deleted-6781fd06-3321-4100-b5d3-9e92a565007b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:57 np0005539551 nova_compute[227360]: 2025-11-29 08:10:57.013 227364 INFO nova.compute.manager [req-8f9e8ae3-c677-4bdb-bc9e-11f190f2773d req-f3585331-45a2-461c-9380-5e763feabfb7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Neutron deleted interface 6781fd06-3321-4100-b5d3-9e92a565007b; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:10:57 np0005539551 nova_compute[227360]: 2025-11-29 08:10:57.013 227364 DEBUG nova.network.neutron [req-8f9e8ae3-c677-4bdb-bc9e-11f190f2773d req-f3585331-45a2-461c-9380-5e763feabfb7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "address": "fa:16:3e:73:c5:06", "network": {"id": "997fb1c7-7a6a-4755-bd31-f24f7590c80c", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-481618025-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5edc2ee1-b4", "ovs_interfaceid": "5edc2ee1-b429-4cb7-8b14-3915aa40d39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:57 np0005539551 nova_compute[227360]: 2025-11-29 08:10:57.333 227364 DEBUG nova.compute.manager [req-8f9e8ae3-c677-4bdb-bc9e-11f190f2773d req-f3585331-45a2-461c-9380-5e763feabfb7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Detach interface failed, port_id=6781fd06-3321-4100-b5d3-9e92a565007b, reason: Instance 86dc24b2-55cd-4720-825d-14b5a233fc8f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:10:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:57.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:59 np0005539551 nova_compute[227360]: 2025-11-29 08:10:59.183 227364 DEBUG nova.compute.manager [req-909b2ddb-06b2-4e39-9910-8ee40b020fbf req-f640f4c4-3124-4e04-96f1-f1fa5f336907 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-deleted-5edc2ee1-b429-4cb7-8b14-3915aa40d39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:59 np0005539551 nova_compute[227360]: 2025-11-29 08:10:59.184 227364 INFO nova.compute.manager [req-909b2ddb-06b2-4e39-9910-8ee40b020fbf req-f640f4c4-3124-4e04-96f1-f1fa5f336907 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Neutron deleted interface 5edc2ee1-b429-4cb7-8b14-3915aa40d39c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:10:59 np0005539551 nova_compute[227360]: 2025-11-29 08:10:59.184 227364 DEBUG nova.network.neutron [req-909b2ddb-06b2-4e39-9910-8ee40b020fbf req-f640f4c4-3124-4e04-96f1-f1fa5f336907 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "064279fc-65b8-4bc6-9578-f43479c76dde", "address": "fa:16:3e:8f:4c:8b", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap064279fc-65", "ovs_interfaceid": "064279fc-65b8-4bc6-9578-f43479c76dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:59 np0005539551 nova_compute[227360]: 2025-11-29 08:10:59.237 227364 DEBUG nova.compute.manager [req-909b2ddb-06b2-4e39-9910-8ee40b020fbf req-f640f4c4-3124-4e04-96f1-f1fa5f336907 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Detach interface failed, port_id=5edc2ee1-b429-4cb7-8b14-3915aa40d39c, reason: Instance 86dc24b2-55cd-4720-825d-14b5a233fc8f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:10:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:59.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:59 np0005539551 nova_compute[227360]: 2025-11-29 08:10:59.857 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:10:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:00 np0005539551 nova_compute[227360]: 2025-11-29 08:11:00.068 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.167 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.168 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.198 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.470 227364 DEBUG nova.compute.manager [req-69ab4729-e937-4705-b293-0c7b6554e9e2 req-a11ec972-040c-4ba4-a06b-ecf166e2b773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-deleted-064279fc-65b8-4bc6-9578-f43479c76dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.470 227364 INFO nova.compute.manager [req-69ab4729-e937-4705-b293-0c7b6554e9e2 req-a11ec972-040c-4ba4-a06b-ecf166e2b773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Neutron deleted interface 064279fc-65b8-4bc6-9578-f43479c76dde; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.470 227364 DEBUG nova.network.neutron [req-69ab4729-e937-4705-b293-0c7b6554e9e2 req-a11ec972-040c-4ba4-a06b-ecf166e2b773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [{"id": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "address": "fa:16:3e:7d:dd:ef", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.101", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08b93c9b-b7", "ovs_interfaceid": "08b93c9b-b707-46be-87f1-913a85c9fdbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "address": "fa:16:3e:a7:4f:0c", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.188", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b94f521-1b", "ovs_interfaceid": "0b94f521-1bc4-4c6b-9d91-f98c1b1371ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580e3c4b-9212-4f39-be82-c8e878e729e2", "address": "fa:16:3e:e1:43:84", "network": {"id": "2e9e9288-8ab6-453e-b0a5-e16458d62484", "bridge": "br-int", "label": "tempest-device-tagging-net1-1618829358", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580e3c4b-92", "ovs_interfaceid": "580e3c4b-9212-4f39-be82-c8e878e729e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "75fb0180-ce5c-4d77-ab28-71baed47a210", "address": "fa:16:3e:07:8b:79", "network": {"id": "c657f09f-ecca-4e01-9a07-39931c8c1994", "bridge": "br-int", "label": "tempest-device-tagging-net2-1829627846", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "28c3d09b9e21417cb7bc44b8552f1b81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75fb0180-ce", "ovs_interfaceid": "75fb0180-ce5c-4d77-ab28-71baed47a210", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.503 227364 DEBUG nova.compute.manager [req-69ab4729-e937-4705-b293-0c7b6554e9e2 req-a11ec972-040c-4ba4-a06b-ecf166e2b773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Detach interface failed, port_id=064279fc-65b8-4bc6-9578-f43479c76dde, reason: Instance 86dc24b2-55cd-4720-825d-14b5a233fc8f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.515 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.516 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.523 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.524 227364 INFO nova.compute.claims [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:11:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:01.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:01 np0005539551 nova_compute[227360]: 2025-11-29 08:11:01.851 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3148740392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.302 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.308 227364 DEBUG nova.compute.provider_tree [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.435 227364 DEBUG nova.scheduler.client.report [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.491 227364 DEBUG nova.network.neutron [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.510 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.511 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.515 227364 INFO nova.compute.manager [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Took 11.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.776 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.777 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.831 227364 INFO nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:11:02 np0005539551 nova_compute[227360]: 2025-11-29 08:11:02.852 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.054 227364 DEBUG nova.policy [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ab0114aca6149af994da2b9052c1368', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8384e5887c0948f5876c019d50057152', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.085 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.087 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.088 227364 INFO nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Creating image(s)#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.126 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.154 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.183 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.187 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.251 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.252 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.253 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.253 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.283 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.287 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 96e45d6d-27c7-4a08-9126-4856b2920133_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.474 227364 INFO nova.compute.manager [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Took 0.96 seconds to detach 3 volumes for instance.#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.551 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.551 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.614 227364 DEBUG oslo_concurrency.processutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.642 227364 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-deleted-580e3c4b-9212-4f39-be82-c8e878e729e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:03 np0005539551 nova_compute[227360]: 2025-11-29 08:11:03.642 227364 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Received event network-vif-deleted-75fb0180-ce5c-4d77-ab28-71baed47a210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:03.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:03.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2406898412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.040 227364 DEBUG oslo_concurrency.processutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.045 227364 DEBUG nova.compute.provider_tree [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.061 227364 DEBUG nova.scheduler.client.report [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.092 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.121 227364 INFO nova.scheduler.client.report [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Deleted allocations for instance 86dc24b2-55cd-4720-825d-14b5a233fc8f#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.209 227364 DEBUG oslo_concurrency.lockutils [None req-813823ab-72ab-4464-bf13-084db1143b21 6ef481e9e8e0440c91abe11aee229780 28c3d09b9e21417cb7bc44b8552f1b81 - - default default] Lock "86dc24b2-55cd-4720-825d-14b5a233fc8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.273 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Successfully created port: 3690f505-411e-49df-9869-848f5e8e1d1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.530 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 96e45d6d-27c7-4a08-9126-4856b2920133_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.607 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.754 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403849.722204, 86dc24b2-55cd-4720-825d-14b5a233fc8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.754 227364 INFO nova.compute.manager [-] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.782 227364 DEBUG nova.compute.manager [None req-fe6995c0-8c36-46f7-9d82-f11b3f63ee9c - - - - - -] [instance: 86dc24b2-55cd-4720-825d-14b5a233fc8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:04 np0005539551 nova_compute[227360]: 2025-11-29 08:11:04.888 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.116 227364 DEBUG nova.objects.instance [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.138 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.138 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Ensure instance console log exists: /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.139 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.139 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.139 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.402 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Successfully updated port: 3690f505-411e-49df-9869-848f5e8e1d1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.432 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.432 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.433 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.626 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.735 227364 DEBUG nova.compute.manager [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.737 227364 DEBUG nova.compute.manager [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing instance network info cache due to event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:05 np0005539551 nova_compute[227360]: 2025-11-29 08:11:05.738 227364 DEBUG oslo_concurrency.lockutils [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:05.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:06 np0005539551 nova_compute[227360]: 2025-11-29 08:11:06.561 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539551 nova_compute[227360]: 2025-11-29 08:11:06.867 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.007 227364 DEBUG nova.network.neutron [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.058 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.059 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance network_info: |[{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.059 227364 DEBUG oslo_concurrency.lockutils [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.060 227364 DEBUG nova.network.neutron [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.064 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start _get_guest_xml network_info=[{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.071 227364 WARNING nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.077 227364 DEBUG nova.virt.libvirt.host [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.078 227364 DEBUG nova.virt.libvirt.host [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.090 227364 DEBUG nova.virt.libvirt.host [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.091 227364 DEBUG nova.virt.libvirt.host [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.092 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.093 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.094 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.094 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.095 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.095 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.096 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.096 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.097 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.097 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.097 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.097 227364 DEBUG nova.virt.hardware [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.101 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3731673038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.566 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.588 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:07 np0005539551 nova_compute[227360]: 2025-11-29 08:11:07.591 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:07.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/193532295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.040 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.042 227364 DEBUG nova.virt.libvirt.vif [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:02Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.043 227364 DEBUG nova.network.os_vif_util [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.044 227364 DEBUG nova.network.os_vif_util [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.045 227364 DEBUG nova.objects.instance [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.067 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <uuid>96e45d6d-27c7-4a08-9126-4856b2920133</uuid>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <name>instance-00000051</name>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1237580642</nova:name>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:11:07</nova:creationTime>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <nova:port uuid="3690f505-411e-49df-9869-848f5e8e1d1a">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="serial">96e45d6d-27c7-4a08-9126-4856b2920133</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="uuid">96e45d6d-27c7-4a08-9126-4856b2920133</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/96e45d6d-27c7-4a08-9126-4856b2920133_disk">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/96e45d6d-27c7-4a08-9126-4856b2920133_disk.config">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:70:93:84"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <target dev="tap3690f505-41"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/console.log" append="off"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:11:08 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:11:08 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:11:08 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:11:08 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.069 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Preparing to wait for external event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.069 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.069 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.070 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.070 227364 DEBUG nova.virt.libvirt.vif [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:02Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.071 227364 DEBUG nova.network.os_vif_util [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.071 227364 DEBUG nova.network.os_vif_util [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.072 227364 DEBUG os_vif [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.073 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.073 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.076 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3690f505-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.076 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3690f505-41, col_values=(('external_ids', {'iface-id': '3690f505-411e-49df-9869-848f5e8e1d1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:93:84', 'vm-uuid': '96e45d6d-27c7-4a08-9126-4856b2920133'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539551 NetworkManager[48922]: <info>  [1764403868.0789] manager: (tap3690f505-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.085 227364 INFO os_vif [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41')#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.162 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.163 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.163 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:70:93:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.163 227364 INFO nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Using config drive#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.185 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.848 227364 INFO nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Creating config drive at /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.852 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0l6dkhm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.917 227364 DEBUG nova.network.neutron [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updated VIF entry in instance network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.918 227364 DEBUG nova.network.neutron [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.934 227364 DEBUG oslo_concurrency.lockutils [req-4f7255b8-a768-4915-aeb8-ece944007f63 req-4d7439dc-0812-403f-bff9-e3392857ca9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:08 np0005539551 nova_compute[227360]: 2025-11-29 08:11:08.996 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0l6dkhm7" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.018 227364 DEBUG nova.storage.rbd_utils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 96e45d6d-27c7-4a08-9126-4856b2920133_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.021 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config 96e45d6d-27c7-4a08-9126-4856b2920133_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.286 227364 DEBUG oslo_concurrency.processutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config 96e45d6d-27c7-4a08-9126-4856b2920133_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.287 227364 INFO nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Deleting local config drive /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/disk.config because it was imported into RBD.#033[00m
Nov 29 03:11:09 np0005539551 kernel: tap3690f505-41: entered promiscuous mode
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.3585] manager: (tap3690f505-41): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.359 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:09Z|00329|binding|INFO|Claiming lport 3690f505-411e-49df-9869-848f5e8e1d1a for this chassis.
Nov 29 03:11:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:09Z|00330|binding|INFO|3690f505-411e-49df-9869-848f5e8e1d1a: Claiming fa:16:3e:70:93:84 10.100.0.12
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.363 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.375 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:93:84 10.100.0.12'], port_security=['fa:16:3e:70:93:84 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96e45d6d-27c7-4a08-9126-4856b2920133', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3690f505-411e-49df-9869-848f5e8e1d1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.375 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3690f505-411e-49df-9869-848f5e8e1d1a in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.377 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.387 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccc8e0d-7d6f-4b05-a785-b282cf8dae8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.388 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.390 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.390 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c6729220-489f-44bf-be14-32405d969144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.392 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[df68a899-ef68-4fc0-b39a-66633594912a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 systemd-udevd[257516]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.403 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfef5db-323f-48c6-bb6f-757989194ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 systemd-machined[190756]: New machine qemu-36-instance-00000051.
Nov 29 03:11:09 np0005539551 podman[257454]: 2025-11-29 08:11:09.409151726 +0000 UTC m=+0.088368183 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.4110] device (tap3690f505-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.4119] device (tap3690f505-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:09 np0005539551 systemd[1]: Started Virtual Machine qemu-36-instance-00000051.
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.422 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a19c7bbd-b757-414e-aa03-81c7022aa35e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 podman[257456]: 2025-11-29 08:11:09.423111998 +0000 UTC m=+0.099760217 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 podman[257453]: 2025-11-29 08:11:09.434732698 +0000 UTC m=+0.116191005 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:09Z|00331|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a ovn-installed in OVS
Nov 29 03:11:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:09Z|00332|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a up in Southbound
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.451 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b929a3-aebb-4471-a480-ef99884426eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.4578] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.457 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e556f1f6-502a-43fc-a7d2-23a510704c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.494 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9923d385-1a88-416c-8de8-894417adf91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.497 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e2016921-84a9-46d3-bbae-3e66a44740ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.5188] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.523 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7d175836-45a7-4c49-ab1c-5aa1e9b32669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.540 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2330c27a-2aef-437a-80ed-2bd9864d8e47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691639, 'reachable_time': 23606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257560, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.555 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a125e1bb-2a40-4e99-bff3-fd93103bb1c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691639, 'tstamp': 691639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257561, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.578 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[962352eb-b4ff-4106-a6c7-9a014cef9ef7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691639, 'reachable_time': 23606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257562, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.608 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd30f82e-1f36-41f3-89b2-75c6d57b7b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.680 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[adda7849-78be-49a8-a368-af7e306e8e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.682 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.683 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.683 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.685 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 NetworkManager[48922]: <info>  [1764403869.6865] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 03:11:09 np0005539551 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.690 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.691 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:09Z|00333|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.711 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.712 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[38b080b0-fcc9-46ca-8b25-7427c5164fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.714 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:11:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:09.715 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.792 227364 DEBUG nova.compute.manager [req-8ad35192-5286-4b47-8fe6-0d71b39f69f9 req-d43c5613-7785-4a29-8342-19d16da40649 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.792 227364 DEBUG oslo_concurrency.lockutils [req-8ad35192-5286-4b47-8fe6-0d71b39f69f9 req-d43c5613-7785-4a29-8342-19d16da40649 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.792 227364 DEBUG oslo_concurrency.lockutils [req-8ad35192-5286-4b47-8fe6-0d71b39f69f9 req-d43c5613-7785-4a29-8342-19d16da40649 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.793 227364 DEBUG oslo_concurrency.lockutils [req-8ad35192-5286-4b47-8fe6-0d71b39f69f9 req-d43c5613-7785-4a29-8342-19d16da40649 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:09 np0005539551 nova_compute[227360]: 2025-11-29 08:11:09.793 227364 DEBUG nova.compute.manager [req-8ad35192-5286-4b47-8fe6-0d71b39f69f9 req-d43c5613-7785-4a29-8342-19d16da40649 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Processing event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:11:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:09.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:09.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.088 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.090 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403870.0896845, 96e45d6d-27c7-4a08-9126-4856b2920133 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.090 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Started (Lifecycle Event)#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.094 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.099 227364 INFO nova.virt.libvirt.driver [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance spawned successfully.#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.099 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.118 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.123 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.130 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.131 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.132 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.132 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.133 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.133 227364 DEBUG nova.virt.libvirt.driver [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:10 np0005539551 podman[257634]: 2025-11-29 08:11:10.13586443 +0000 UTC m=+0.060751419 container create 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.149 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.149 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403870.090428, 96e45d6d-27c7-4a08-9126-4856b2920133 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.149 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:11:10 np0005539551 systemd[1]: Started libpod-conmon-6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0.scope.
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.177 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.180 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403870.0937274, 96e45d6d-27c7-4a08-9126-4856b2920133 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.180 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:11:10 np0005539551 podman[257634]: 2025-11-29 08:11:10.103842307 +0000 UTC m=+0.028729356 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.207 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.211 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:10 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:11:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2931d49ff99e49469b4b52303ef300aceadd9d5094ecf791a69fb99a6eba1d9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.222 227364 INFO nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Took 7.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.223 227364 DEBUG nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:10 np0005539551 podman[257634]: 2025-11-29 08:11:10.23162882 +0000 UTC m=+0.156515829 container init 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.235 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:11:10 np0005539551 podman[257634]: 2025-11-29 08:11:10.238532424 +0000 UTC m=+0.163419413 container start 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:11:10 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [NOTICE]   (257653) : New worker (257655) forked
Nov 29 03:11:10 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [NOTICE]   (257653) : Loading success.
Nov 29 03:11:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.292 227364 INFO nova.compute.manager [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Took 8.80 seconds to build instance.#033[00m
Nov 29 03:11:10 np0005539551 nova_compute[227360]: 2025-11-29 08:11:10.313 227364 DEBUG oslo_concurrency.lockutils [None req-77dc0a90-d02e-4b67-8738-c2d2bd36525c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:11.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.927 227364 DEBUG nova.compute.manager [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.928 227364 DEBUG oslo_concurrency.lockutils [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.928 227364 DEBUG oslo_concurrency.lockutils [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.928 227364 DEBUG oslo_concurrency.lockutils [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.928 227364 DEBUG nova.compute.manager [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:11 np0005539551 nova_compute[227360]: 2025-11-29 08:11:11.929 227364 WARNING nova.compute.manager [req-633c3e00-ed75-4d77-aa39-49403459cd81 req-931a69f1-9677-4cfa-8f05-aeff8f9ebb18 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:13 np0005539551 nova_compute[227360]: 2025-11-29 08:11:13.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:11:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4258135923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:11:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:11:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4258135923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:11:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:13.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:13.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:15 np0005539551 nova_compute[227360]: 2025-11-29 08:11:15.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:11:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2726311682' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:11:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:11:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2726311682' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:11:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:15.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:16 np0005539551 nova_compute[227360]: 2025-11-29 08:11:16.031 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:16 np0005539551 nova_compute[227360]: 2025-11-29 08:11:16.031 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:16 np0005539551 nova_compute[227360]: 2025-11-29 08:11:16.032 227364 DEBUG nova.network.neutron [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.079 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.537 227364 DEBUG nova.network.neutron [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.586 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.734 227364 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.735 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Creating file /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/fb502c90c94343d49b5c069b6068db3d.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:11:18 np0005539551 nova_compute[227360]: 2025-11-29 08:11:18.735 227364 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/fb502c90c94343d49b5c069b6068db3d.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.108 227364 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/fb502c90c94343d49b5c069b6068db3d.tmp" returned: 1 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.109 227364 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/fb502c90c94343d49b5c069b6068db3d.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.110 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Creating directory /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.110 227364 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.330 227364 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:19 np0005539551 nova_compute[227360]: 2025-11-29 08:11:19.336 227364 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:11:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:19.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:19.861 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:19.862 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:19.862 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:20 np0005539551 nova_compute[227360]: 2025-11-29 08:11:20.078 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:21 np0005539551 nova_compute[227360]: 2025-11-29 08:11:21.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.449 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:11:22 np0005539551 nova_compute[227360]: 2025-11-29 08:11:22.449 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:23 np0005539551 nova_compute[227360]: 2025-11-29 08:11:23.081 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:23.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:24Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:93:84 10.100.0.12
Nov 29 03:11:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:24Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:93:84 10.100.0.12
Nov 29 03:11:24 np0005539551 nova_compute[227360]: 2025-11-29 08:11:24.816 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:24 np0005539551 nova_compute[227360]: 2025-11-29 08:11:24.847 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:24 np0005539551 nova_compute[227360]: 2025-11-29 08:11:24.848 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:11:24 np0005539551 nova_compute[227360]: 2025-11-29 08:11:24.848 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539551 nova_compute[227360]: 2025-11-29 08:11:25.079 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:25 np0005539551 nova_compute[227360]: 2025-11-29 08:11:25.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539551 nova_compute[227360]: 2025-11-29 08:11:25.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:25.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:25.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.465 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.466 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.466 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.466 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.466 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3338104615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.875 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.989 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:26 np0005539551 nova_compute[227360]: 2025-11-29 08:11:26.990 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.125 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.126 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4456MB free_disk=20.910324096679688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.126 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.127 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.199 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating resource usage from migration e2865359-f83a-40e7-a7b3-e0d673d76ec8#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.242 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration e2865359-f83a-40e7-a7b3-e0d673d76ec8 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.242 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.242 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.318 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2995176580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.730 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.736 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.764 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.810 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:11:27 np0005539551 nova_compute[227360]: 2025-11-29 08:11:27.810 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:27.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:28 np0005539551 nova_compute[227360]: 2025-11-29 08:11:28.096 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539551 nova_compute[227360]: 2025-11-29 08:11:29.380 227364 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:11:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:29.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:29.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:30 np0005539551 nova_compute[227360]: 2025-11-29 08:11:30.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.736929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891736978, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2118, "num_deletes": 256, "total_data_size": 4838965, "memory_usage": 4893200, "flush_reason": "Manual Compaction"}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891762234, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 3181158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37846, "largest_seqno": 39959, "table_properties": {"data_size": 3172556, "index_size": 5224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18594, "raw_average_key_size": 20, "raw_value_size": 3154991, "raw_average_value_size": 3455, "num_data_blocks": 228, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403711, "oldest_key_time": 1764403711, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 25381 microseconds, and 13887 cpu microseconds.
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.762278) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 3181158 bytes OK
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.762329) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.763900) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.763914) EVENT_LOG_v1 {"time_micros": 1764403891763909, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.763932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4829451, prev total WAL file size 4875597, number of live WAL files 2.
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.765128) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(3106KB)], [72(10MB)]
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891765160, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 13867927, "oldest_snapshot_seqno": -1}
Nov 29 03:11:31 np0005539551 kernel: tap3690f505-41 (unregistering): left promiscuous mode
Nov 29 03:11:31 np0005539551 NetworkManager[48922]: <info>  [1764403891.7839] device (tap3690f505-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:31Z|00334|binding|INFO|Releasing lport 3690f505-411e-49df-9869-848f5e8e1d1a from this chassis (sb_readonly=0)
Nov 29 03:11:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:31Z|00335|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a down in Southbound
Nov 29 03:11:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:11:31Z|00336|binding|INFO|Removing iface tap3690f505-41 ovn-installed in OVS
Nov 29 03:11:31 np0005539551 nova_compute[227360]: 2025-11-29 08:11:31.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:31 np0005539551 nova_compute[227360]: 2025-11-29 08:11:31.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:31 np0005539551 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 29 03:11:31 np0005539551 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Consumed 14.591s CPU time.
Nov 29 03:11:31 np0005539551 systemd-machined[190756]: Machine qemu-36-instance-00000051 terminated.
Nov 29 03:11:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7302 keys, 13715051 bytes, temperature: kUnknown
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891870659, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 13715051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13662751, "index_size": 32924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 187910, "raw_average_key_size": 25, "raw_value_size": 13528324, "raw_average_value_size": 1852, "num_data_blocks": 1317, "num_entries": 7302, "num_filter_entries": 7302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:31.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.870989) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13715051 bytes
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.873372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.3 rd, 129.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 7833, records dropped: 531 output_compression: NoCompression
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.873393) EVENT_LOG_v1 {"time_micros": 1764403891873383, "job": 44, "event": "compaction_finished", "compaction_time_micros": 105620, "compaction_time_cpu_micros": 31046, "output_level": 6, "num_output_files": 1, "total_output_size": 13715051, "num_input_records": 7833, "num_output_records": 7302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891874126, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891876052, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.765096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.876194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.876206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.876207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.876208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:31.876210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:31.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.395 227364 INFO nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.404 227364 INFO nova.virt.libvirt.driver [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance destroyed successfully.#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.406 227364 DEBUG nova.virt.libvirt.vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:14Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.406 227364 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.407 227364 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.408 227364 DEBUG os_vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.411 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3690f505-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.416 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.420 227364 INFO os_vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41')#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.426 227364 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.426 227364 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.451 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:93:84 10.100.0.12'], port_security=['fa:16:3e:70:93:84 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96e45d6d-27c7-4a08-9126-4856b2920133', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3690f505-411e-49df-9869-848f5e8e1d1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.453 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3690f505-411e-49df-9869-848f5e8e1d1a in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.456 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.457 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0eab43e1-bf05-405c-b4f6-8e41c01c2d01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.458 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [NOTICE]   (257653) : haproxy version is 2.8.14-c23fe91
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [NOTICE]   (257653) : path to executable is /usr/sbin/haproxy
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [WARNING]  (257653) : Exiting Master process...
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [WARNING]  (257653) : Exiting Master process...
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [ALERT]    (257653) : Current worker (257655) exited with code 143 (Terminated)
Nov 29 03:11:32 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[257649]: [WARNING]  (257653) : All workers exited. Exiting... (0)
Nov 29 03:11:32 np0005539551 systemd[1]: libpod-6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0.scope: Deactivated successfully.
Nov 29 03:11:32 np0005539551 podman[257877]: 2025-11-29 08:11:32.660615186 +0000 UTC m=+0.068604168 container died 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:11:32 np0005539551 systemd[1]: var-lib-containers-storage-overlay-2931d49ff99e49469b4b52303ef300aceadd9d5094ecf791a69fb99a6eba1d9f-merged.mount: Deactivated successfully.
Nov 29 03:11:32 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:11:32 np0005539551 podman[257877]: 2025-11-29 08:11:32.708269965 +0000 UTC m=+0.116258947 container cleanup 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:11:32 np0005539551 systemd[1]: libpod-conmon-6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0.scope: Deactivated successfully.
Nov 29 03:11:32 np0005539551 podman[257908]: 2025-11-29 08:11:32.77755965 +0000 UTC m=+0.043050827 container remove 6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.783 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bac90394-e2e5-49a5-94cc-ed8a3bb72292]: (4, ('Sat Nov 29 08:11:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0)\n6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0\nSat Nov 29 08:11:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0)\n6b74328a8684dcc7b7167a11e83018b226c1fa6da701e7a2ae8c654e3f5270d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.786 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f758c817-5052-4013-9b9a-e5e05c76f685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.787 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:32 np0005539551 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.856 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.858 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[09420c71-1f9d-40df-baaf-cd49ad84ccda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 nova_compute[227360]: 2025-11-29 08:11:32.871 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.873 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c6134c-fba4-4731-99bd-4755a02447f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.875 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[766a78b6-547b-409c-82ac-bb5aff17a723]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.893 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5371ab61-da6a-48bd-9058-2753c01d0f10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691632, 'reachable_time': 40046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257925, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:32 np0005539551 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.896 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:11:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:32.896 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d183995e-8e97-408f-a00d-06521d229bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:33.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:33 np0005539551 nova_compute[227360]: 2025-11-29 08:11:33.911 227364 DEBUG neutronclient.v2_0.client [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 3690f505-411e-49df-9869-848f5e8e1d1a for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:11:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:33.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.089 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.090 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.091 227364 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.679 227364 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.680 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.680 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.681 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.681 227364 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.682 227364 WARNING nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.682 227364 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.683 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.684 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.684 227364 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.685 227364 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.685 227364 WARNING nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.810 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.811 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:34 np0005539551 nova_compute[227360]: 2025-11-29 08:11:34.811 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:11:35 np0005539551 nova_compute[227360]: 2025-11-29 08:11:35.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:35.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:36 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:37 np0005539551 nova_compute[227360]: 2025-11-29 08:11:37.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:37.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:39 np0005539551 nova_compute[227360]: 2025-11-29 08:11:39.217 227364 DEBUG nova.compute.manager [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:39 np0005539551 nova_compute[227360]: 2025-11-29 08:11:39.221 227364 DEBUG nova.compute.manager [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing instance network info cache due to event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:39 np0005539551 nova_compute[227360]: 2025-11-29 08:11:39.221 227364 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:39 np0005539551 nova_compute[227360]: 2025-11-29 08:11:39.222 227364 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:39 np0005539551 nova_compute[227360]: 2025-11-29 08:11:39.222 227364 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:39 np0005539551 podman[257977]: 2025-11-29 08:11:39.631971348 +0000 UTC m=+0.075779429 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:11:39 np0005539551 podman[257978]: 2025-11-29 08:11:39.638774729 +0000 UTC m=+0.068396162 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:11:39 np0005539551 podman[257976]: 2025-11-29 08:11:39.673568966 +0000 UTC m=+0.111941632 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 29 03:11:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:39.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:39.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:40 np0005539551 nova_compute[227360]: 2025-11-29 08:11:40.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:42 np0005539551 nova_compute[227360]: 2025-11-29 08:11:42.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:44 np0005539551 nova_compute[227360]: 2025-11-29 08:11:44.750 227364 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updated VIF entry in instance network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:44 np0005539551 nova_compute[227360]: 2025-11-29 08:11:44.750 227364 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:44 np0005539551 nova_compute[227360]: 2025-11-29 08:11:44.784 227364 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 03:11:45 np0005539551 nova_compute[227360]: 2025-11-29 08:11:45.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:45.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.046 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403892.0448496, 96e45d6d-27c7-4a08-9126-4856b2920133 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.046 227364 INFO nova.compute.manager [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.151 227364 DEBUG nova.compute.manager [None req-6ea1f8ba-8e45-4666-b607-26c792024e5a - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.156 227364 DEBUG nova.compute.manager [None req-6ea1f8ba-8e45-4666-b607-26c792024e5a - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.214 227364 DEBUG nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.214 227364 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.215 227364 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.215 227364 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.215 227364 DEBUG nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.215 227364 WARNING nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.421 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:47 np0005539551 nova_compute[227360]: 2025-11-29 08:11:47.480 227364 INFO nova.compute.manager [None req-6ea1f8ba-8e45-4666-b607-26c792024e5a - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:11:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:47.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:47.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.467 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.467 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.468 227364 DEBUG nova.compute.manager [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.494 227364 DEBUG nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.495 227364 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.495 227364 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.495 227364 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.495 227364 DEBUG nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:49 np0005539551 nova_compute[227360]: 2025-11-29 08:11:49.495 227364 WARNING nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:11:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.000 227364 DEBUG neutronclient.v2_0.client [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 3690f505-411e-49df-9869-848f5e8e1d1a for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.001 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.001 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.001 227364 DEBUG nova.network.neutron [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.002 227364 DEBUG nova.objects.instance [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'info_cache' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:50 np0005539551 nova_compute[227360]: 2025-11-29 08:11:50.087 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:51.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:52 np0005539551 nova_compute[227360]: 2025-11-29 08:11:52.425 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539551 nova_compute[227360]: 2025-11-29 08:11:53.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:53.046 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:11:53.047 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:11:53 np0005539551 nova_compute[227360]: 2025-11-29 08:11:53.197 227364 DEBUG nova.network.neutron [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:53 np0005539551 nova_compute[227360]: 2025-11-29 08:11:53.252 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:53 np0005539551 nova_compute[227360]: 2025-11-29 08:11:53.252 227364 DEBUG nova.objects.instance [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:53 np0005539551 nova_compute[227360]: 2025-11-29 08:11:53.453 227364 DEBUG nova.storage.rbd_utils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] removing snapshot(nova-resize) on rbd image(96e45d6d-27c7-4a08-9126-4856b2920133_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:11:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.423 227364 DEBUG nova.virt.libvirt.vif [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:47Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.424 227364 DEBUG nova.network.os_vif_util [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.424 227364 DEBUG nova.network.os_vif_util [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.425 227364 DEBUG os_vif [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.426 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3690f505-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.426 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.428 227364 INFO os_vif [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41')#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.428 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.429 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.533 227364 DEBUG oslo_concurrency.processutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2435378699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.966 227364 DEBUG oslo_concurrency.processutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:54 np0005539551 nova_compute[227360]: 2025-11-29 08:11:54.978 227364 DEBUG nova.compute.provider_tree [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:55 np0005539551 nova_compute[227360]: 2025-11-29 08:11:55.010 227364 DEBUG nova.scheduler.client.report [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:55 np0005539551 nova_compute[227360]: 2025-11-29 08:11:55.089 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:55 np0005539551 nova_compute[227360]: 2025-11-29 08:11:55.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:55 np0005539551 nova_compute[227360]: 2025-11-29 08:11:55.307 227364 INFO nova.scheduler.client.report [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocation for migration e2865359-f83a-40e7-a7b3-e0d673d76ec8#033[00m
Nov 29 03:11:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:55 np0005539551 nova_compute[227360]: 2025-11-29 08:11:55.381 227364 DEBUG oslo_concurrency.lockutils [None req-7afa5e0b-6ec4-4cd6-b10c-acd65f681e3a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:55.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.282841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916282885, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 251, "total_data_size": 783461, "memory_usage": 795304, "flush_reason": "Manual Compaction"}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916290599, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 516957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39964, "largest_seqno": 40501, "table_properties": {"data_size": 514062, "index_size": 867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7000, "raw_average_key_size": 19, "raw_value_size": 508197, "raw_average_value_size": 1403, "num_data_blocks": 37, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403891, "oldest_key_time": 1764403891, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 7817 microseconds, and 3589 cpu microseconds.
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.290658) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 516957 bytes OK
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.290683) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.291815) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.291834) EVENT_LOG_v1 {"time_micros": 1764403916291828, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.291855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 780292, prev total WAL file size 780292, number of live WAL files 2.
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.292545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(504KB)], [75(13MB)]
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916292597, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 14232008, "oldest_snapshot_seqno": -1}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7147 keys, 12294331 bytes, temperature: kUnknown
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916416029, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 12294331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244596, "index_size": 30743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 185415, "raw_average_key_size": 25, "raw_value_size": 12114436, "raw_average_value_size": 1695, "num_data_blocks": 1216, "num_entries": 7147, "num_filter_entries": 7147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.416256) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 12294331 bytes
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.417998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 99.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(51.3) write-amplify(23.8) OK, records in: 7664, records dropped: 517 output_compression: NoCompression
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.418013) EVENT_LOG_v1 {"time_micros": 1764403916418006, "job": 46, "event": "compaction_finished", "compaction_time_micros": 123497, "compaction_time_cpu_micros": 49443, "output_level": 6, "num_output_files": 1, "total_output_size": 12294331, "num_input_records": 7664, "num_output_records": 7147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916418193, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916420310, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.292452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.420458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.420467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.420471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.420475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:11:56.420479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:57 np0005539551 nova_compute[227360]: 2025-11-29 08:11:57.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:57.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:57.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:59.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:11:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:59.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:00.049 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:00 np0005539551 nova_compute[227360]: 2025-11-29 08:12:00.092 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 03:12:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:01.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:02 np0005539551 nova_compute[227360]: 2025-11-29 08:12:02.432 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.095 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.587 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.588 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.615 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.729 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.730 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.741 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.741 227364 INFO nova.compute.claims [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:12:05 np0005539551 nova_compute[227360]: 2025-11-29 08:12:05.918 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3058290385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.391 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.397 227364 DEBUG nova.compute.provider_tree [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.420 227364 DEBUG nova.scheduler.client.report [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.458 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.459 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.518 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.518 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.553 227364 INFO nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:12:06 np0005539551 nova_compute[227360]: 2025-11-29 08:12:06.583 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.005 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.007 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.007 227364 INFO nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Creating image(s)#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.039 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.066 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.090 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.094 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.161 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.161 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.162 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.162 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.187 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.191 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 aa47ccff-ce1a-4fea-9236-6652f9da5330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:07 np0005539551 nova_compute[227360]: 2025-11-29 08:12:07.815 227364 DEBUG nova.policy [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ab0114aca6149af994da2b9052c1368', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8384e5887c0948f5876c019d50057152', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:07.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.658 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 aa47ccff-ce1a-4fea-9236-6652f9da5330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.773 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.928 227364 DEBUG nova.objects.instance [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid aa47ccff-ce1a-4fea-9236-6652f9da5330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.953 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.954 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Ensure instance console log exists: /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.955 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.955 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:08 np0005539551 nova_compute[227360]: 2025-11-29 08:12:08.955 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:09 np0005539551 nova_compute[227360]: 2025-11-29 08:12:09.708 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Successfully created port: 69e73356-b715-4583-9de4-9e35b4deb600 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:10 np0005539551 nova_compute[227360]: 2025-11-29 08:12:10.098 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:10 np0005539551 podman[258288]: 2025-11-29 08:12:10.619531376 +0000 UTC m=+0.062362803 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:10 np0005539551 podman[258287]: 2025-11-29 08:12:10.629327696 +0000 UTC m=+0.078288846 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:12:10 np0005539551 podman[258286]: 2025-11-29 08:12:10.67717992 +0000 UTC m=+0.129979662 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.477 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Successfully updated port: 69e73356-b715-4583-9de4-9e35b4deb600 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.600 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.600 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.601 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.622 227364 DEBUG nova.compute.manager [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-changed-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.623 227364 DEBUG nova.compute.manager [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Refreshing instance network info cache due to event network-changed-69e73356-b715-4583-9de4-9e35b4deb600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:11 np0005539551 nova_compute[227360]: 2025-11-29 08:12:11.623 227364 DEBUG oslo_concurrency.lockutils [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:12 np0005539551 nova_compute[227360]: 2025-11-29 08:12:12.170 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:12 np0005539551 nova_compute[227360]: 2025-11-29 08:12:12.439 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.009 227364 DEBUG nova.network.neutron [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updating instance_info_cache with network_info: [{"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.033 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.033 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Instance network_info: |[{"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.034 227364 DEBUG oslo_concurrency.lockutils [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.034 227364 DEBUG nova.network.neutron [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Refreshing network info cache for port 69e73356-b715-4583-9de4-9e35b4deb600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.037 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Start _get_guest_xml network_info=[{"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.042 227364 WARNING nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.053 227364 DEBUG nova.virt.libvirt.host [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.054 227364 DEBUG nova.virt.libvirt.host [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.057 227364 DEBUG nova.virt.libvirt.host [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.058 227364 DEBUG nova.virt.libvirt.host [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.059 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.059 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.060 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.060 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.060 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.061 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.061 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.061 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.061 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.062 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.062 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.062 227364 DEBUG nova.virt.hardware [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.065 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1810802583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.566 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.592 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:15 np0005539551 nova_compute[227360]: 2025-11-29 08:12:15.596 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:16.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1610630193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.046 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.049 227364 DEBUG nova.virt.libvirt.vif [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2065877529',display_name='tempest-ServerDiskConfigTestJSON-server-2065877529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2065877529',id=84,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-imp37oax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDi
skConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:06Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=aa47ccff-ce1a-4fea-9236-6652f9da5330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.050 227364 DEBUG nova.network.os_vif_util [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.052 227364 DEBUG nova.network.os_vif_util [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.054 227364 DEBUG nova.objects.instance [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa47ccff-ce1a-4fea-9236-6652f9da5330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.083 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <uuid>aa47ccff-ce1a-4fea-9236-6652f9da5330</uuid>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <name>instance-00000054</name>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-2065877529</nova:name>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:12:15</nova:creationTime>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <nova:port uuid="69e73356-b715-4583-9de4-9e35b4deb600">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="serial">aa47ccff-ce1a-4fea-9236-6652f9da5330</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="uuid">aa47ccff-ce1a-4fea-9236-6652f9da5330</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aa47ccff-ce1a-4fea-9236-6652f9da5330_disk">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:82:af:c1"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <target dev="tap69e73356-b7"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/console.log" append="off"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:12:16 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:12:16 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:12:16 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:12:16 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.086 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Preparing to wait for external event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.086 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.087 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.087 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.088 227364 DEBUG nova.virt.libvirt.vif [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2065877529',display_name='tempest-ServerDiskConfigTestJSON-server-2065877529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2065877529',id=84,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-imp37oax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempes
t-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:06Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=aa47ccff-ce1a-4fea-9236-6652f9da5330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.089 227364 DEBUG nova.network.os_vif_util [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.090 227364 DEBUG nova.network.os_vif_util [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.091 227364 DEBUG os_vif [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.092 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.092 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.095 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.095 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69e73356-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.096 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69e73356-b7, col_values=(('external_ids', {'iface-id': '69e73356-b715-4583-9de4-9e35b4deb600', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:af:c1', 'vm-uuid': 'aa47ccff-ce1a-4fea-9236-6652f9da5330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539551 NetworkManager[48922]: <info>  [1764403936.0988] manager: (tap69e73356-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.099 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.107 227364 INFO os_vif [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7')#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.186 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.186 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.187 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:82:af:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.187 227364 INFO nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Using config drive#033[00m
Nov 29 03:12:16 np0005539551 nova_compute[227360]: 2025-11-29 08:12:16.214 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.078 227364 INFO nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Creating config drive at /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.086 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29d0_i4a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.229 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29d0_i4a" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.262 227364 DEBUG nova.storage.rbd_utils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.266 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.427 227364 DEBUG oslo_concurrency.processutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config aa47ccff-ce1a-4fea-9236-6652f9da5330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.428 227364 INFO nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Deleting local config drive /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:17 np0005539551 kernel: tap69e73356-b7: entered promiscuous mode
Nov 29 03:12:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:17Z|00337|binding|INFO|Claiming lport 69e73356-b715-4583-9de4-9e35b4deb600 for this chassis.
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.501 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:17Z|00338|binding|INFO|69e73356-b715-4583-9de4-9e35b4deb600: Claiming fa:16:3e:82:af:c1 10.100.0.6
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.5043] manager: (tap69e73356-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.511 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:af:c1 10.100.0.6'], port_security=['fa:16:3e:82:af:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa47ccff-ce1a-4fea-9236-6652f9da5330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=69e73356-b715-4583-9de4-9e35b4deb600) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.513 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 69e73356-b715-4583-9de4-9e35b4deb600 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.516 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:12:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:17Z|00339|binding|INFO|Setting lport 69e73356-b715-4583-9de4-9e35b4deb600 ovn-installed in OVS
Nov 29 03:12:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:17Z|00340|binding|INFO|Setting lport 69e73356-b715-4583-9de4-9e35b4deb600 up in Southbound
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.523 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.526 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a0e69b-a8d6-4ed7-9925-39b5925f6931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.527 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.530 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.531 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b541b358-5afb-4ecf-b1f3-b8d3ad02e51e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.531 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f46a42bb-6fd5-4329-9886-24bfdbd976f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 systemd-machined[190756]: New machine qemu-37-instance-00000054.
Nov 29 03:12:17 np0005539551 systemd[1]: Started Virtual Machine qemu-37-instance-00000054.
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.545 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e65f1957-957b-48c2-ab55-c7d7ae16ca10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.566 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[722a4dee-fc61-455e-8e6b-e88f89194b28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 systemd-udevd[258487]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.5857] device (tap69e73356-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.5868] device (tap69e73356-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.606 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a17193-8332-4138-8798-0a58367c5002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.610 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaa68f4-b931-4e29-a90f-cf40c58c4cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.6120] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.644 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0e68ef84-07e5-4c6e-9104-dbb9707aa88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.648 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[466ec156-b561-4c8d-a20e-1c8e1bb3a1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.6694] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.674 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[671bf0b3-656a-47c6-b2d9-f366d376879b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.695 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f5af6294-ab6b-4931-8375-ef77a4a8020b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698454, 'reachable_time': 42608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258517, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.712 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a7149695-b711-4bfe-b5f3-335e303a61e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698454, 'tstamp': 698454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258518, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.734 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f584aa19-9f2d-4daf-a2f3-4c56b44571a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698454, 'reachable_time': 42608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258519, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.764 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74de3092-f52a-4170-b8fa-00500b5ae6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.841 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[91bad50e-e8f0-4f5e-99d1-e8b73eb1ef71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.843 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.843 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.843 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 NetworkManager[48922]: <info>  [1764403937.8465] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 03:12:17 np0005539551 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.849 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.849 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:17Z|00341|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:12:17 np0005539551 nova_compute[227360]: 2025-11-29 08:12:17.877 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.878 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.878 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fad34c25-0480-4014-bc09-4a215740497e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.879 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:17.880 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:17.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.165 227364 DEBUG nova.compute.manager [req-637dc017-7d76-46ec-828f-b0156a1bf701 req-ad2de353-514b-4b68-a8af-2c7a3ee5ad28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.166 227364 DEBUG oslo_concurrency.lockutils [req-637dc017-7d76-46ec-828f-b0156a1bf701 req-ad2de353-514b-4b68-a8af-2c7a3ee5ad28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.166 227364 DEBUG oslo_concurrency.lockutils [req-637dc017-7d76-46ec-828f-b0156a1bf701 req-ad2de353-514b-4b68-a8af-2c7a3ee5ad28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.167 227364 DEBUG oslo_concurrency.lockutils [req-637dc017-7d76-46ec-828f-b0156a1bf701 req-ad2de353-514b-4b68-a8af-2c7a3ee5ad28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.167 227364 DEBUG nova.compute.manager [req-637dc017-7d76-46ec-828f-b0156a1bf701 req-ad2de353-514b-4b68-a8af-2c7a3ee5ad28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Processing event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:18 np0005539551 podman[258551]: 2025-11-29 08:12:18.208616008 +0000 UTC m=+0.049337076 container create 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:12:18 np0005539551 systemd[1]: Started libpod-conmon-0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257.scope.
Nov 29 03:12:18 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:12:18 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af472885a5d5b00276161100f497db435d31dacef9091e71f6b3761f7d95ffb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:18 np0005539551 podman[258551]: 2025-11-29 08:12:18.275594271 +0000 UTC m=+0.116315349 container init 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:18 np0005539551 podman[258551]: 2025-11-29 08:12:18.180021275 +0000 UTC m=+0.020742373 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.280 227364 DEBUG nova.network.neutron [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updated VIF entry in instance network info cache for port 69e73356-b715-4583-9de4-9e35b4deb600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.282 227364 DEBUG nova.network.neutron [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updating instance_info_cache with network_info: [{"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:18 np0005539551 podman[258551]: 2025-11-29 08:12:18.283747398 +0000 UTC m=+0.124468466 container start 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:18 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [NOTICE]   (258572) : New worker (258574) forked
Nov 29 03:12:18 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [NOTICE]   (258572) : Loading success.
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.317 227364 DEBUG oslo_concurrency.lockutils [req-64f72d4b-873a-4ee3-a1b3-f7178fc7fd95 req-82fe09e0-3139-4827-b7a5-be3a1491b298 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.759 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403938.7586043, aa47ccff-ce1a-4fea-9236-6652f9da5330 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.760 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.762 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.767 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.770 227364 INFO nova.virt.libvirt.driver [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Instance spawned successfully.#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.770 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.801 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.808 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.811 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.812 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.813 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.813 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.814 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.815 227364 DEBUG nova.virt.libvirt.driver [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.852 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.853 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403938.7587092, aa47ccff-ce1a-4fea-9236-6652f9da5330 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.853 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.886 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.892 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403938.7653418, aa47ccff-ce1a-4fea-9236-6652f9da5330 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.892 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.901 227364 INFO nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Took 11.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.901 227364 DEBUG nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.926 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.930 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.971 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:18 np0005539551 nova_compute[227360]: 2025-11-29 08:12:18.981 227364 INFO nova.compute.manager [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Took 13.30 seconds to build instance.#033[00m
Nov 29 03:12:19 np0005539551 nova_compute[227360]: 2025-11-29 08:12:19.000 227364 DEBUG oslo_concurrency.lockutils [None req-912cc076-cfc2-4b15-84cc-ae292854ad5d 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:19.862 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:19.863 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:19.863 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:20.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.395 227364 DEBUG nova.compute.manager [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.396 227364 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.396 227364 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.396 227364 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.396 227364 DEBUG nova.compute.manager [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] No waiting events found dispatching network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:20 np0005539551 nova_compute[227360]: 2025-11-29 08:12:20.396 227364 WARNING nova.compute.manager [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received unexpected event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:21 np0005539551 nova_compute[227360]: 2025-11-29 08:12:21.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:21 np0005539551 nova_compute[227360]: 2025-11-29 08:12:21.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:21.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:22.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.962 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.963 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.963 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:12:22 np0005539551 nova_compute[227360]: 2025-11-29 08:12:22.963 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid aa47ccff-ce1a-4fea-9236-6652f9da5330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:23.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.105 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.706 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updating instance_info_cache with network_info: [{"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.743 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-aa47ccff-ce1a-4fea-9236-6652f9da5330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.744 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.745 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:25 np0005539551 nova_compute[227360]: 2025-11-29 08:12:25.745 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.099 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.102 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.102 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.103 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.103 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.104 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.107 227364 INFO nova.compute.manager [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Terminating instance#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.109 227364 DEBUG nova.compute.manager [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:12:26 np0005539551 kernel: tap69e73356-b7 (unregistering): left promiscuous mode
Nov 29 03:12:26 np0005539551 NetworkManager[48922]: <info>  [1764403946.1595] device (tap69e73356-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:26Z|00342|binding|INFO|Releasing lport 69e73356-b715-4583-9de4-9e35b4deb600 from this chassis (sb_readonly=0)
Nov 29 03:12:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:26Z|00343|binding|INFO|Setting lport 69e73356-b715-4583-9de4-9e35b4deb600 down in Southbound
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.168 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:26Z|00344|binding|INFO|Removing iface tap69e73356-b7 ovn-installed in OVS
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.179 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:af:c1 10.100.0.6'], port_security=['fa:16:3e:82:af:c1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'aa47ccff-ce1a-4fea-9236-6652f9da5330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=69e73356-b715-4583-9de4-9e35b4deb600) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.182 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 69e73356-b715-4583-9de4-9e35b4deb600 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.186 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.187 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19d5a7dc-7b34-4de4-a0f4-a239797139a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.189 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.188 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:12:26 np0005539551 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539551 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000054.scope: Consumed 8.807s CPU time.
Nov 29 03:12:26 np0005539551 systemd-machined[190756]: Machine qemu-37-instance-00000054 terminated.
Nov 29 03:12:26 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [NOTICE]   (258572) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:26 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [NOTICE]   (258572) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:26 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [WARNING]  (258572) : Exiting Master process...
Nov 29 03:12:26 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [ALERT]    (258572) : Current worker (258574) exited with code 143 (Terminated)
Nov 29 03:12:26 np0005539551 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[258566]: [WARNING]  (258572) : All workers exited. Exiting... (0)
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 systemd[1]: libpod-0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.333 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 podman[258650]: 2025-11-29 08:12:26.335931795 +0000 UTC m=+0.050804024 container died 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.344 227364 INFO nova.virt.libvirt.driver [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Instance destroyed successfully.#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.345 227364 DEBUG nova.objects.instance [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid aa47ccff-ce1a-4fea-9236-6652f9da5330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:26 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539551 systemd[1]: var-lib-containers-storage-overlay-af472885a5d5b00276161100f497db435d31dacef9091e71f6b3761f7d95ffb2-merged.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.374 227364 DEBUG nova.virt.libvirt.vif [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2065877529',display_name='tempest-ServerDiskConfigTestJSON-server-2065877529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2065877529',id=84,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-imp37oax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:24Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=aa47ccff-ce1a-4fea-9236-6652f9da5330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.374 227364 DEBUG nova.network.os_vif_util [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "69e73356-b715-4583-9de4-9e35b4deb600", "address": "fa:16:3e:82:af:c1", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69e73356-b7", "ovs_interfaceid": "69e73356-b715-4583-9de4-9e35b4deb600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.375 227364 DEBUG nova.network.os_vif_util [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.375 227364 DEBUG os_vif [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.377 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.377 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69e73356-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.379 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 podman[258650]: 2025-11-29 08:12:26.381475617 +0000 UTC m=+0.096347826 container cleanup 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.382 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.385 227364 INFO os_vif [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:af:c1,bridge_name='br-int',has_traffic_filtering=True,id=69e73356-b715-4583-9de4-9e35b4deb600,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69e73356-b7')#033[00m
Nov 29 03:12:26 np0005539551 systemd[1]: libpod-conmon-0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:26 np0005539551 podman[258691]: 2025-11-29 08:12:26.493549322 +0000 UTC m=+0.089124795 container remove 0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.502 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb7f037-5459-4ad8-97f9-a842a4a72d32]: (4, ('Sat Nov 29 08:12:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257)\n0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257\nSat Nov 29 08:12:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257)\n0fa9e6024ca487bbaedd7da77dd46c453bfa05c4819e57b0096d0f8c8ea8e257\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.504 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ef9c1e-7988-4814-9214-941942b64178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.505 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.506 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:12:26 np0005539551 nova_compute[227360]: 2025-11-29 08:12:26.519 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.522 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[60cf2c1f-737c-4d3c-aaab-c99ec59c455a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.539 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[24619ec5-4249-479c-8ab8-524f8c12dec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.541 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[338a4282-55b3-452e-b672-a75cc3430f87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.564 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18f93813-e3bd-4ef1-9856-bf4bbd57e351]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698447, 'reachable_time': 31306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258725, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539551 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.568 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:26.568 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c5238f01-dc7f-4eb0-9456-d8ce76227ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.274 227364 INFO nova.virt.libvirt.driver [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Deleting instance files /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330_del#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.275 227364 INFO nova.virt.libvirt.driver [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Deletion of /var/lib/nova/instances/aa47ccff-ce1a-4fea-9236-6652f9da5330_del complete#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.553 227364 INFO nova.compute.manager [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.555 227364 DEBUG oslo.service.loopingcall [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.555 227364 DEBUG nova.compute.manager [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:12:27 np0005539551 nova_compute[227360]: 2025-11-29 08:12:27.555 227364 DEBUG nova.network.neutron [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:12:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:27.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:28.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.473 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.474 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.474 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.474 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.475 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2533207051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:28 np0005539551 nova_compute[227360]: 2025-11-29 08:12:28.890 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.029 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.030 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4602MB free_disk=20.880294799804688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.030 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.031 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.273 227364 DEBUG nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-unplugged-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.274 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] No waiting events found dispatching network-vif-unplugged-69e73356-b715-4583-9de4-9e35b4deb600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-unplugged-69e73356-b715-4583-9de4-9e35b4deb600 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.275 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.276 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.276 227364 DEBUG oslo_concurrency.lockutils [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.276 227364 DEBUG nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] No waiting events found dispatching network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.276 227364 WARNING nova.compute.manager [req-0df3df34-db38-4239-976d-9ac770c9eaea req-bf60d835-a83a-436b-8137-4d20c2c90586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received unexpected event network-vif-plugged-69e73356-b715-4583-9de4-9e35b4deb600 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.649 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance aa47ccff-ce1a-4fea-9236-6652f9da5330 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.649 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.650 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.720 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.823 227364 DEBUG nova.network.neutron [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:29 np0005539551 nova_compute[227360]: 2025-11-29 08:12:29.894 227364 INFO nova.compute.manager [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Took 2.34 seconds to deallocate network for instance.#033[00m
Nov 29 03:12:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:29.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.107 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2013472266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.151 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.156 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.604 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.612 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.996 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.997 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:30 np0005539551 nova_compute[227360]: 2025-11-29 08:12:30.997 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.057 227364 DEBUG oslo_concurrency.processutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.378 227364 DEBUG nova.compute.manager [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Received event network-vif-deleted-69e73356-b715-4583-9de4-9e35b4deb600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.379 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520791236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.487 227364 DEBUG oslo_concurrency.processutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.492 227364 DEBUG nova.compute.provider_tree [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.507 227364 DEBUG nova.scheduler.client.report [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.526 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.556 227364 INFO nova.scheduler.client.report [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocations for instance aa47ccff-ce1a-4fea-9236-6652f9da5330#033[00m
Nov 29 03:12:31 np0005539551 nova_compute[227360]: 2025-11-29 08:12:31.645 227364 DEBUG oslo_concurrency.lockutils [None req-9a1cbe8b-93cb-408d-bb7c-ca91f99b3241 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "aa47ccff-ce1a-4fea-9236-6652f9da5330" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:31.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:35 np0005539551 nova_compute[227360]: 2025-11-29 08:12:35.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.239 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.240 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.261 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.326 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.327 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.335 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.336 227364 INFO nova.compute.claims [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.383 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.446 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1757111044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.887 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.893 227364 DEBUG nova.compute.provider_tree [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.941 227364 DEBUG nova.scheduler.client.report [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.968 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.969 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.998 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.998 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:36 np0005539551 nova_compute[227360]: 2025-11-29 08:12:36.998 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.073 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.073 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.097 227364 INFO nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.120 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.232 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.233 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.233 227364 INFO nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Creating image(s)#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.262 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.294 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.327 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.330 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.393 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.394 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.394 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.395 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.416 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.419 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eda0d174-a55c-4f6c-a128-8f10face8c8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.687 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eda0d174-a55c-4f6c-a128-8f10face8c8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.749 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] resizing rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.841 227364 DEBUG nova.objects.instance [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'migration_context' on Instance uuid eda0d174-a55c-4f6c-a128-8f10face8c8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.847 227364 DEBUG nova.policy [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '247a4abb59cd459fa66a891e998e548c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.863 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.863 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Ensure instance console log exists: /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.864 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.864 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:37 np0005539551 nova_compute[227360]: 2025-11-29 08:12:37.864 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:38.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:39 np0005539551 nova_compute[227360]: 2025-11-29 08:12:39.244 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Successfully created port: 4a78a8a2-9369-4d10-a787-dcbfbcc434fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:39 np0005539551 nova_compute[227360]: 2025-11-29 08:12:39.364 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:39 np0005539551 nova_compute[227360]: 2025-11-29 08:12:39.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:39.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:40 np0005539551 nova_compute[227360]: 2025-11-29 08:12:40.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:40 np0005539551 nova_compute[227360]: 2025-11-29 08:12:40.356 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Successfully created port: adb6ac2c-d9da-47d8-91a1-21fee2c2a28b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:41 np0005539551 nova_compute[227360]: 2025-11-29 08:12:41.341 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403946.3395753, aa47ccff-ce1a-4fea-9236-6652f9da5330 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:41 np0005539551 nova_compute[227360]: 2025-11-29 08:12:41.341 227364 INFO nova.compute.manager [-] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:12:41 np0005539551 nova_compute[227360]: 2025-11-29 08:12:41.362 227364 DEBUG nova.compute.manager [None req-451494fd-d824-4334-8d27-cc1e251cb8ee - - - - - -] [instance: aa47ccff-ce1a-4fea-9236-6652f9da5330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:41 np0005539551 nova_compute[227360]: 2025-11-29 08:12:41.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:41 np0005539551 podman[259115]: 2025-11-29 08:12:41.766171127 +0000 UTC m=+0.054049670 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:12:41 np0005539551 podman[259113]: 2025-11-29 08:12:41.796610138 +0000 UTC m=+0.083976247 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:12:41 np0005539551 podman[259114]: 2025-11-29 08:12:41.79667298 +0000 UTC m=+0.084550483 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:12:41 np0005539551 nova_compute[227360]: 2025-11-29 08:12:41.968 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Successfully updated port: 4a78a8a2-9369-4d10-a787-dcbfbcc434fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:42.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.119 227364 DEBUG nova.compute.manager [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-changed-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.120 227364 DEBUG nova.compute.manager [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Refreshing instance network info cache due to event network-changed-4a78a8a2-9369-4d10-a787-dcbfbcc434fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.120 227364 DEBUG oslo_concurrency.lockutils [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.121 227364 DEBUG oslo_concurrency.lockutils [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.121 227364 DEBUG nova.network.neutron [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Refreshing network info cache for port 4a78a8a2-9369-4d10-a787-dcbfbcc434fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:42 np0005539551 nova_compute[227360]: 2025-11-29 08:12:42.824 227364 DEBUG nova.network.neutron [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.416 227364 DEBUG nova.network.neutron [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.438 227364 DEBUG oslo_concurrency.lockutils [req-64dde9e1-6fac-4f77-b1b6-64609296a82c req-059b14b0-e3e6-4071-b84c-a6a395ff21d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.679 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Successfully updated port: adb6ac2c-d9da-47d8-91a1-21fee2c2a28b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.696 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.696 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquired lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.696 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:43 np0005539551 nova_compute[227360]: 2025-11-29 08:12:43.890 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:44.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:44 np0005539551 nova_compute[227360]: 2025-11-29 08:12:44.257 227364 DEBUG nova.compute.manager [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-changed-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:44 np0005539551 nova_compute[227360]: 2025-11-29 08:12:44.257 227364 DEBUG nova.compute.manager [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Refreshing instance network info cache due to event network-changed-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:44 np0005539551 nova_compute[227360]: 2025-11-29 08:12:44.258 227364 DEBUG oslo_concurrency.lockutils [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:45 np0005539551 nova_compute[227360]: 2025-11-29 08:12:45.112 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.379 227364 DEBUG nova.network.neutron [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updating instance_info_cache with network_info: [{"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.407 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Releasing lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.408 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance network_info: |[{"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.409 227364 DEBUG oslo_concurrency.lockutils [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.409 227364 DEBUG nova.network.neutron [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Refreshing network info cache for port adb6ac2c-d9da-47d8-91a1-21fee2c2a28b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.416 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Start _get_guest_xml network_info=[{"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.423 227364 WARNING nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.428 227364 DEBUG nova.virt.libvirt.host [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.429 227364 DEBUG nova.virt.libvirt.host [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.433 227364 DEBUG nova.virt.libvirt.host [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.433 227364 DEBUG nova.virt.libvirt.host [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.435 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.435 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.436 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.436 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.436 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.436 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.436 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.437 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.437 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.437 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.437 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.437 227364 DEBUG nova.virt.hardware [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.440 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1820934542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.936 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.970 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:46 np0005539551 nova_compute[227360]: 2025-11-29 08:12:46.974 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3185301205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.412 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.415 227364 DEBUG nova.virt.libvirt.vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665
875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:37Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.416 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.417 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.418 227364 DEBUG nova.virt.libvirt.vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665
875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:37Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.418 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.419 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.420 227364 DEBUG nova.objects.instance [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'pci_devices' on Instance uuid eda0d174-a55c-4f6c-a128-8f10face8c8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.436 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <uuid>eda0d174-a55c-4f6c-a128-8f10face8c8c</uuid>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <name>instance-00000057</name>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersTestMultiNic-server-1762067701</nova:name>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:12:46</nova:creationTime>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:user uuid="247a4abb59cd459fa66a891e998e548c">tempest-ServersTestMultiNic-2054665875-project-member</nova:user>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:project uuid="c5e5a2f42dc64b7cb0a22b666f160b1d">tempest-ServersTestMultiNic-2054665875</nova:project>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:port uuid="4a78a8a2-9369-4d10-a787-dcbfbcc434fd">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.142" ipVersion="4"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <nova:port uuid="adb6ac2c-d9da-47d8-91a1-21fee2c2a28b">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.1.215" ipVersion="4"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="serial">eda0d174-a55c-4f6c-a128-8f10face8c8c</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="uuid">eda0d174-a55c-4f6c-a128-8f10face8c8c</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/eda0d174-a55c-4f6c-a128-8f10face8c8c_disk">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:67:36:ae"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <target dev="tap4a78a8a2-93"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ef:ed:a4"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <target dev="tapadb6ac2c-d9"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/console.log" append="off"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:12:47 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:12:47 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:12:47 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:12:47 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.437 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Preparing to wait for external event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.437 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.438 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.438 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.438 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Preparing to wait for external event network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.438 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.439 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.439 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.439 227364 DEBUG nova.virt.libvirt.vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:37Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.440 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.440 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.440 227364 DEBUG os_vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.441 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.441 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.442 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.445 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a78a8a2-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.445 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a78a8a2-93, col_values=(('external_ids', {'iface-id': '4a78a8a2-9369-4d10-a787-dcbfbcc434fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:36:ae', 'vm-uuid': 'eda0d174-a55c-4f6c-a128-8f10face8c8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.446 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 NetworkManager[48922]: <info>  [1764403967.4479] manager: (tap4a78a8a2-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.449 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.452 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.453 227364 INFO os_vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93')#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.454 227364 DEBUG nova.virt.libvirt.vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:37Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.454 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.455 227364 DEBUG nova.network.os_vif_util [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.455 227364 DEBUG os_vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.455 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.456 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.456 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.458 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadb6ac2c-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.458 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapadb6ac2c-d9, col_values=(('external_ids', {'iface-id': 'adb6ac2c-d9da-47d8-91a1-21fee2c2a28b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:ed:a4', 'vm-uuid': 'eda0d174-a55c-4f6c-a128-8f10face8c8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 NetworkManager[48922]: <info>  [1764403967.4607] manager: (tapadb6ac2c-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.462 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.467 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.468 227364 INFO os_vif [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9')#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.511 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.512 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.512 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No VIF found with MAC fa:16:3e:67:36:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.512 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No VIF found with MAC fa:16:3e:ef:ed:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.512 227364 INFO nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Using config drive#033[00m
Nov 29 03:12:47 np0005539551 nova_compute[227360]: 2025-11-29 08:12:47.535 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:47.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:48.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.080 227364 INFO nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Creating config drive at /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.084 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcobp0_g8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.219 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcobp0_g8" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.246 227364 DEBUG nova.storage.rbd_utils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.250 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.490 227364 DEBUG oslo_concurrency.processutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config eda0d174-a55c-4f6c-a128-8f10face8c8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.492 227364 INFO nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Deleting local config drive /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:48 np0005539551 kernel: tap4a78a8a2-93: entered promiscuous mode
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.5499] manager: (tap4a78a8a2-93): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00345|binding|INFO|Claiming lport 4a78a8a2-9369-4d10-a787-dcbfbcc434fd for this chassis.
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00346|binding|INFO|4a78a8a2-9369-4d10-a787-dcbfbcc434fd: Claiming fa:16:3e:67:36:ae 10.100.0.142
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.552 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.561 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:36:ae 10.100.0.142'], port_security=['fa:16:3e:67:36:ae 10.100.0.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.142/24', 'neutron:device_id': 'eda0d174-a55c-4f6c-a128-8f10face8c8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df8c555e-da64-491a-8baf-3c0f59a8167d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=230dc13e-d5a1-42a3-bdbc-5e79b10a31e0, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4a78a8a2-9369-4d10-a787-dcbfbcc434fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.563 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4a78a8a2-9369-4d10-a787-dcbfbcc434fd in datapath df8c555e-da64-491a-8baf-3c0f59a8167d bound to our chassis#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.564 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df8c555e-da64-491a-8baf-3c0f59a8167d#033[00m
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.5719] manager: (tapadb6ac2c-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.577 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d39ee60f-5b15-4602-a044-af8eff457529]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.578 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf8c555e-d1 in ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:48 np0005539551 systemd-udevd[259365]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.580 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf8c555e-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.580 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5dca6dcf-1437-4216-bbae-13726d79344c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 systemd-udevd[259366]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.581 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0f7f1f-8bb6-42a2-85c7-697ae6fa8772]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.593 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3885dc-ddcd-44f2-9e4e-ca51f2fbd0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 kernel: tapadb6ac2c-d9: entered promiscuous mode
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.5989] device (tapadb6ac2c-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00347|binding|INFO|Claiming lport adb6ac2c-d9da-47d8-91a1-21fee2c2a28b for this chassis.
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00348|binding|INFO|adb6ac2c-d9da-47d8-91a1-21fee2c2a28b: Claiming fa:16:3e:ef:ed:a4 10.100.1.215
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.602 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.6033] device (tapadb6ac2c-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.6050] device (tap4a78a8a2-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00349|binding|INFO|Setting lport 4a78a8a2-9369-4d10-a787-dcbfbcc434fd ovn-installed in OVS
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00350|binding|INFO|Setting lport 4a78a8a2-9369-4d10-a787-dcbfbcc434fd up in Southbound
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.605 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:ed:a4 10.100.1.215'], port_security=['fa:16:3e:ef:ed:a4 10.100.1.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.215/24', 'neutron:device_id': 'eda0d174-a55c-4f6c-a128-8f10face8c8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e44bf01-9687-4683-88cd-0013b1f3762b, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.6087] device (tap4a78a8a2-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.607 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 systemd-machined[190756]: New machine qemu-38-instance-00000057.
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.621 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[99aa9543-7bec-4817-876b-ff7181b00ff7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00351|binding|INFO|Setting lport adb6ac2c-d9da-47d8-91a1-21fee2c2a28b ovn-installed in OVS
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00352|binding|INFO|Setting lport adb6ac2c-d9da-47d8-91a1-21fee2c2a28b up in Southbound
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.642 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 systemd[1]: Started Virtual Machine qemu-38-instance-00000057.
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.652 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[646698f0-f3f2-4997-94ff-123ffa5baa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.658 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ab353a-be3d-4d10-a218-e5415050c773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.6598] manager: (tapdf8c555e-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.689 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0a66a092-df19-4672-b013-8838f1200db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.692 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5730b609-1527-47a9-bb3d-e3275efd9162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.7125] device (tapdf8c555e-d0): carrier: link connected
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.717 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[36afdbe0-8f85-4762-ab15-cd65e8703165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.734 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca3a76d-08dd-4045-a970-5656d575eca7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf8c555e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:2e:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701559, 'reachable_time': 28638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259401, 'error': None, 'target': 'ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.749 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ec887a-cb6a-4a7d-81c2-2f680532022c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:2ee4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701559, 'tstamp': 701559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259402, 'error': None, 'target': 'ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.771 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[80d48e34-7577-4dfc-8367-77bd3179c0b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf8c555e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:2e:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701559, 'reachable_time': 28638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259403, 'error': None, 'target': 'ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.805 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6d25bf25-0b9b-4135-b516-6168430e777e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.840 227364 DEBUG nova.network.neutron [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updated VIF entry in instance network info cache for port adb6ac2c-d9da-47d8-91a1-21fee2c2a28b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.841 227364 DEBUG nova.network.neutron [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updating instance_info_cache with network_info: [{"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.863 227364 DEBUG oslo_concurrency.lockutils [req-245367a6-575e-413e-b23c-8afeffad552c req-9ccb750b-22e7-4954-8b21-e2ee9ef14066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eda0d174-a55c-4f6c-a128-8f10face8c8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.865 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6033cbf8-8bd2-4d0d-a99e-45c38ffee270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.867 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf8c555e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.867 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.867 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf8c555e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.869 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 kernel: tapdf8c555e-d0: entered promiscuous mode
Nov 29 03:12:48 np0005539551 NetworkManager[48922]: <info>  [1764403968.8701] manager: (tapdf8c555e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.877 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf8c555e-d0, col_values=(('external_ids', {'iface-id': '5d7962aa-906b-46f4-bff2-af10de9df4ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.878 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:48Z|00353|binding|INFO|Releasing lport 5d7962aa-906b-46f4-bff2-af10de9df4ce from this chassis (sb_readonly=0)
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.879 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.882 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df8c555e-da64-491a-8baf-3c0f59a8167d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df8c555e-da64-491a-8baf-3c0f59a8167d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd25812e-9296-468b-927d-7075568b4f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.883 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-df8c555e-da64-491a-8baf-3c0f59a8167d
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/df8c555e-da64-491a-8baf-3c0f59a8167d.pid.haproxy
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID df8c555e-da64-491a-8baf-3c0f59a8167d
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:48.884 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d', 'env', 'PROCESS_TAG=haproxy-df8c555e-da64-491a-8baf-3c0f59a8167d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df8c555e-da64-491a-8baf-3c0f59a8167d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.906 227364 DEBUG nova.compute.manager [req-7d43c590-82ab-4b0c-9bc7-162a3c5ba3ed req-4ae22792-2aac-4a29-a8cf-82a7185965d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.906 227364 DEBUG oslo_concurrency.lockutils [req-7d43c590-82ab-4b0c-9bc7-162a3c5ba3ed req-4ae22792-2aac-4a29-a8cf-82a7185965d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.906 227364 DEBUG oslo_concurrency.lockutils [req-7d43c590-82ab-4b0c-9bc7-162a3c5ba3ed req-4ae22792-2aac-4a29-a8cf-82a7185965d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.906 227364 DEBUG oslo_concurrency.lockutils [req-7d43c590-82ab-4b0c-9bc7-162a3c5ba3ed req-4ae22792-2aac-4a29-a8cf-82a7185965d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.907 227364 DEBUG nova.compute.manager [req-7d43c590-82ab-4b0c-9bc7-162a3c5ba3ed req-4ae22792-2aac-4a29-a8cf-82a7185965d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Processing event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.992 227364 DEBUG nova.compute.manager [req-06afbb26-3785-4228-a05d-968f6edc481d req-9e1a5a59-80aa-4fd2-bc31-27efcddf35a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.992 227364 DEBUG oslo_concurrency.lockutils [req-06afbb26-3785-4228-a05d-968f6edc481d req-9e1a5a59-80aa-4fd2-bc31-27efcddf35a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.992 227364 DEBUG oslo_concurrency.lockutils [req-06afbb26-3785-4228-a05d-968f6edc481d req-9e1a5a59-80aa-4fd2-bc31-27efcddf35a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.993 227364 DEBUG oslo_concurrency.lockutils [req-06afbb26-3785-4228-a05d-968f6edc481d req-9e1a5a59-80aa-4fd2-bc31-27efcddf35a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:48 np0005539551 nova_compute[227360]: 2025-11-29 08:12:48.993 227364 DEBUG nova.compute.manager [req-06afbb26-3785-4228-a05d-968f6edc481d req-9e1a5a59-80aa-4fd2-bc31-27efcddf35a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Processing event network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:49 np0005539551 podman[259435]: 2025-11-29 08:12:49.231028154 +0000 UTC m=+0.022071879 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:49 np0005539551 podman[259435]: 2025-11-29 08:12:49.544510972 +0000 UTC m=+0.335554677 container create 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:12:49 np0005539551 systemd[1]: Started libpod-conmon-1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066.scope.
Nov 29 03:12:49 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:12:49 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9e8639f6b2bcc632bba5d82e7a84b06e024b9163c87bee459ebb3de86137d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:49 np0005539551 podman[259435]: 2025-11-29 08:12:49.624101372 +0000 UTC m=+0.415145087 container init 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:12:49 np0005539551 podman[259435]: 2025-11-29 08:12:49.630069391 +0000 UTC m=+0.421113096 container start 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:12:49 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [NOTICE]   (259496) : New worker (259499) forked
Nov 29 03:12:49 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [NOTICE]   (259496) : Loading success.
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.670 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403969.6697698, eda0d174-a55c-4f6c-a128-8f10face8c8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.670 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.672 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.675 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.678 227364 INFO nova.virt.libvirt.driver [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance spawned successfully.#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.678 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.690 139482 INFO neutron.agent.ovn.metadata.agent [-] Port adb6ac2c-d9da-47d8-91a1-21fee2c2a28b in datapath ef7a3323-9113-4395-8455-c2b1e5a382e2 unbound from our chassis#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.692 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef7a3323-9113-4395-8455-c2b1e5a382e2#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.697 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.702 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.703 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9a233e5a-c711-4683-83cf-6a0ecc9ba302]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.703 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef7a3323-91 in ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.705 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef7a3323-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.705 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[273b50ab-3363-4e21-8d33-dbb9864c2622]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.706 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fc354bb6-c541-4502-9edb-6e85155ae3e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.706 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.707 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.707 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.708 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.708 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.709 227364 DEBUG nova.virt.libvirt.driver [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.716 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[5df0c4cd-0acb-4118-af08-af302734da0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.721 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.722 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403969.669969, eda0d174-a55c-4f6c-a128-8f10face8c8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.722 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.741 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f6898cd0-f5e6-46a8-883b-e0f71ee2306b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.752 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.755 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403969.6748714, eda0d174-a55c-4f6c-a128-8f10face8c8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.756 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.772 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5a053c8b-62a0-4c23-a187-929b8d24b438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 NetworkManager[48922]: <info>  [1764403969.7787] manager: (tapef7a3323-90): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.780 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[95931968-a269-4e70-888e-ca85c2bef97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.786 227364 INFO nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Took 12.55 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.787 227364 DEBUG nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.787 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.797 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.810 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3da8ce5f-de92-4d7a-8dca-627894bc5cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.815 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[db3b12c0-b600-49a4-b632-5b47058f331e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 NetworkManager[48922]: <info>  [1764403969.8351] device (tapef7a3323-90): carrier: link connected
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.841 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8f35913f-8db8-40f5-a869-ea370be70d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.856 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.856 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[47f44255-7cc2-46bd-9d51-a26b075de0ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef7a3323-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:1b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701671, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259518, 'error': None, 'target': 'ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.870 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fef41f-1f2f-4489-a72b-cfefc26b4353]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:1bef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701671, 'tstamp': 701671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259519, 'error': None, 'target': 'ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.886 227364 INFO nova.compute.manager [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Took 13.58 seconds to build instance.#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.888 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[03400615-68fa-4f34-8d9e-8d2340234f2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef7a3323-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:1b:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701671, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259520, 'error': None, 'target': 'ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.914 227364 DEBUG oslo_concurrency.lockutils [None req-4ac303c0-b951-48a5-914a-e3d2d8b0243e 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.919 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f8486f77-0572-405f-bc99-83fc3160b3a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.990 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6f2246-5e77-47f7-b5bf-f02cb4de9441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.991 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef7a3323-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.992 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef7a3323-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539551 NetworkManager[48922]: <info>  [1764403969.9944] manager: (tapef7a3323-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 03:12:49 np0005539551 kernel: tapef7a3323-90: entered promiscuous mode
Nov 29 03:12:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.996 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:49.997 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef7a3323-90, col_values=(('external_ids', {'iface-id': 'c280f8ca-04fc-47eb-a9b8-f400374920d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539551 nova_compute[227360]: 2025-11-29 08:12:49.998 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:49Z|00354|binding|INFO|Releasing lport c280f8ca-04fc-47eb-a9b8-f400374920d6 from this chassis (sb_readonly=0)
Nov 29 03:12:50 np0005539551 nova_compute[227360]: 2025-11-29 08:12:50.012 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:50.014 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef7a3323-9113-4395-8455-c2b1e5a382e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef7a3323-9113-4395-8455-c2b1e5a382e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:50.015 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc6bbf5-a737-486c-a8e4-9ad2d43e306c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:50.016 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ef7a3323-9113-4395-8455-c2b1e5a382e2
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ef7a3323-9113-4395-8455-c2b1e5a382e2.pid.haproxy
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ef7a3323-9113-4395-8455-c2b1e5a382e2
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:50.017 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'env', 'PROCESS_TAG=haproxy-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef7a3323-9113-4395-8455-c2b1e5a382e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:50.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:50 np0005539551 nova_compute[227360]: 2025-11-29 08:12:50.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:50 np0005539551 podman[259553]: 2025-11-29 08:12:50.391611391 +0000 UTC m=+0.060658246 container create ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:12:50 np0005539551 systemd[1]: Started libpod-conmon-ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c.scope.
Nov 29 03:12:50 np0005539551 podman[259553]: 2025-11-29 08:12:50.359069954 +0000 UTC m=+0.028116829 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:50 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:12:50 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444412ff052fe8111a2ecb9dd73fb67a2b4989f1139acb62c3845a64fb06b42b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:50 np0005539551 podman[259553]: 2025-11-29 08:12:50.488555282 +0000 UTC m=+0.157602097 container init ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:12:50 np0005539551 podman[259553]: 2025-11-29 08:12:50.495031536 +0000 UTC m=+0.164078361 container start ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:12:50 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [NOTICE]   (259572) : New worker (259574) forked
Nov 29 03:12:50 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [NOTICE]   (259572) : Loading success.
Nov 29 03:12:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.001 227364 DEBUG nova.compute.manager [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.001 227364 DEBUG oslo_concurrency.lockutils [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.002 227364 DEBUG oslo_concurrency.lockutils [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.002 227364 DEBUG oslo_concurrency.lockutils [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.002 227364 DEBUG nova.compute.manager [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] No waiting events found dispatching network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.003 227364 WARNING nova.compute.manager [req-6be7fa49-5dd3-42cc-9107-bdbb252bda95 req-6d0a7f44-5f91-48ec-94c5-52c8229d6c29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received unexpected event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.089 227364 DEBUG nova.compute.manager [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.090 227364 DEBUG oslo_concurrency.lockutils [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.090 227364 DEBUG oslo_concurrency.lockutils [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.090 227364 DEBUG oslo_concurrency.lockutils [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.091 227364 DEBUG nova.compute.manager [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] No waiting events found dispatching network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.091 227364 WARNING nova.compute.manager [req-d6dd3d06-94ba-4e78-ab3a-f9d6258ba711 req-73e80322-c4eb-4201-b8ff-3053e6f32517 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received unexpected event network-vif-plugged-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.755 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.756 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.757 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.758 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.758 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.759 227364 INFO nova.compute.manager [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Terminating instance#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.760 227364 DEBUG nova.compute.manager [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:12:51 np0005539551 kernel: tap4a78a8a2-93 (unregistering): left promiscuous mode
Nov 29 03:12:51 np0005539551 NetworkManager[48922]: <info>  [1764403971.8000] device (tap4a78a8a2-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00355|binding|INFO|Releasing lport 4a78a8a2-9369-4d10-a787-dcbfbcc434fd from this chassis (sb_readonly=0)
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00356|binding|INFO|Setting lport 4a78a8a2-9369-4d10-a787-dcbfbcc434fd down in Southbound
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00357|binding|INFO|Removing iface tap4a78a8a2-93 ovn-installed in OVS
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.808 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.814 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:36:ae 10.100.0.142'], port_security=['fa:16:3e:67:36:ae 10.100.0.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.142/24', 'neutron:device_id': 'eda0d174-a55c-4f6c-a128-8f10face8c8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df8c555e-da64-491a-8baf-3c0f59a8167d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=230dc13e-d5a1-42a3-bdbc-5e79b10a31e0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4a78a8a2-9369-4d10-a787-dcbfbcc434fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.815 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4a78a8a2-9369-4d10-a787-dcbfbcc434fd in datapath df8c555e-da64-491a-8baf-3c0f59a8167d unbound from our chassis#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.816 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df8c555e-da64-491a-8baf-3c0f59a8167d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.817 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c7880a-d9a2-4671-83e3-ab2d2248c9cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.818 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d namespace which is not needed anymore#033[00m
Nov 29 03:12:51 np0005539551 kernel: tapadb6ac2c-d9 (unregistering): left promiscuous mode
Nov 29 03:12:51 np0005539551 NetworkManager[48922]: <info>  [1764403971.8229] device (tapadb6ac2c-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.824 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00358|binding|INFO|Releasing lport adb6ac2c-d9da-47d8-91a1-21fee2c2a28b from this chassis (sb_readonly=0)
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00359|binding|INFO|Setting lport adb6ac2c-d9da-47d8-91a1-21fee2c2a28b down in Southbound
Nov 29 03:12:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:12:51Z|00360|binding|INFO|Removing iface tapadb6ac2c-d9 ovn-installed in OVS
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.836 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:51.846 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:ed:a4 10.100.1.215'], port_security=['fa:16:3e:ef:ed:a4 10.100.1.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.215/24', 'neutron:device_id': 'eda0d174-a55c-4f6c-a128-8f10face8c8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e44bf01-9687-4683-88cd-0013b1f3762b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 29 03:12:51 np0005539551 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000057.scope: Consumed 2.915s CPU time.
Nov 29 03:12:51 np0005539551 systemd-machined[190756]: Machine qemu-38-instance-00000057 terminated.
Nov 29 03:12:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 03:12:51 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [NOTICE]   (259496) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:51 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [NOTICE]   (259496) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:51 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [WARNING]  (259496) : Exiting Master process...
Nov 29 03:12:51 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [ALERT]    (259496) : Current worker (259499) exited with code 143 (Terminated)
Nov 29 03:12:51 np0005539551 neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d[259489]: [WARNING]  (259496) : All workers exited. Exiting... (0)
Nov 29 03:12:51 np0005539551 systemd[1]: libpod-1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066.scope: Deactivated successfully.
Nov 29 03:12:51 np0005539551 podman[259612]: 2025-11-29 08:12:51.958450867 +0000 UTC m=+0.048893252 container died 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:51 np0005539551 nova_compute[227360]: 2025-11-29 08:12:51.987 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:51 np0005539551 systemd[1]: var-lib-containers-storage-overlay-7c9e8639f6b2bcc632bba5d82e7a84b06e024b9163c87bee459ebb3de86137d8-merged.mount: Deactivated successfully.
Nov 29 03:12:52 np0005539551 NetworkManager[48922]: <info>  [1764403971.9999] manager: (tapadb6ac2c-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 03:12:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:51.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.008 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 podman[259612]: 2025-11-29 08:12:52.010202446 +0000 UTC m=+0.100644841 container cleanup 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.013 227364 INFO nova.virt.libvirt.driver [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Instance destroyed successfully.#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.014 227364 DEBUG nova.objects.instance [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'resources' on Instance uuid eda0d174-a55c-4f6c-a128-8f10face8c8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:52 np0005539551 systemd[1]: libpod-conmon-1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066.scope: Deactivated successfully.
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.028 227364 DEBUG nova.virt.libvirt.vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:49Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.029 227364 DEBUG nova.network.os_vif_util [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.029 227364 DEBUG nova.network.os_vif_util [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.030 227364 DEBUG os_vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.032 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a78a8a2-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.034 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.036 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.041 227364 INFO os_vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:36:ae,bridge_name='br-int',has_traffic_filtering=True,id=4a78a8a2-9369-4d10-a787-dcbfbcc434fd,network=Network(df8c555e-da64-491a-8baf-3c0f59a8167d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a78a8a2-93')#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.043 227364 DEBUG nova.virt.libvirt.vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1762067701',display_name='tempest-ServersTestMultiNic-server-1762067701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1762067701',id=87,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-vsv4vaza',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:49Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=eda0d174-a55c-4f6c-a128-8f10face8c8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.043 227364 DEBUG nova.network.os_vif_util [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "address": "fa:16:3e:ef:ed:a4", "network": {"id": "ef7a3323-9113-4395-8455-c2b1e5a382e2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1561467401", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadb6ac2c-d9", "ovs_interfaceid": "adb6ac2c-d9da-47d8-91a1-21fee2c2a28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.044 227364 DEBUG nova.network.os_vif_util [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.044 227364 DEBUG os_vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.046 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadb6ac2c-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.052 227364 INFO os_vif [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:ed:a4,bridge_name='br-int',has_traffic_filtering=True,id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b,network=Network(ef7a3323-9113-4395-8455-c2b1e5a382e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapadb6ac2c-d9')#033[00m
Nov 29 03:12:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:52 np0005539551 podman[259658]: 2025-11-29 08:12:52.078147755 +0000 UTC m=+0.045663477 container remove 1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.087 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b032dc30-5be2-4c66-8f23-98f3032a3b36]: (4, ('Sat Nov 29 08:12:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d (1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066)\n1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066\nSat Nov 29 08:12:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d (1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066)\n1e338e048964acacfe9cc5873058a7059864ee874e8084b5868194ddb393c066\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.089 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[40ee39e8-d068-46e8-b572-ac58a25de400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.091 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf8c555e-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.094 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 kernel: tapdf8c555e-d0: left promiscuous mode
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.095 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.101 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e82b0a55-b3a1-4cc9-b304-efdcc00efcfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.119 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9439bc78-0988-44da-aaae-8b5964b9c033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.120 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[edb808ca-3abf-4583-bd7c-4ce537de3e29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.135 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c22f0954-2ea9-4555-978a-9e399c76d5e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701552, 'reachable_time': 18361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259690, 'error': None, 'target': 'ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.141 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df8c555e-da64-491a-8baf-3c0f59a8167d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.142 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9a200c37-f140-4529-a683-4d5f24edd6c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 systemd[1]: run-netns-ovnmeta\x2ddf8c555e\x2dda64\x2d491a\x2d8baf\x2d3c0f59a8167d.mount: Deactivated successfully.
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.143 139482 INFO neutron.agent.ovn.metadata.agent [-] Port adb6ac2c-d9da-47d8-91a1-21fee2c2a28b in datapath ef7a3323-9113-4395-8455-c2b1e5a382e2 unbound from our chassis#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.145 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef7a3323-9113-4395-8455-c2b1e5a382e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.146 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18d831a3-60ae-45de-8d8c-fa52e2c54651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.146 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2 namespace which is not needed anymore#033[00m
Nov 29 03:12:52 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [NOTICE]   (259572) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:52 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [NOTICE]   (259572) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:52 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [WARNING]  (259572) : Exiting Master process...
Nov 29 03:12:52 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [ALERT]    (259572) : Current worker (259574) exited with code 143 (Terminated)
Nov 29 03:12:52 np0005539551 neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2[259568]: [WARNING]  (259572) : All workers exited. Exiting... (0)
Nov 29 03:12:52 np0005539551 systemd[1]: libpod-ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c.scope: Deactivated successfully.
Nov 29 03:12:52 np0005539551 podman[259708]: 2025-11-29 08:12:52.318366523 +0000 UTC m=+0.067309014 container died ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:12:52 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:52 np0005539551 systemd[1]: var-lib-containers-storage-overlay-444412ff052fe8111a2ecb9dd73fb67a2b4989f1139acb62c3845a64fb06b42b-merged.mount: Deactivated successfully.
Nov 29 03:12:52 np0005539551 podman[259708]: 2025-11-29 08:12:52.354550946 +0000 UTC m=+0.103493437 container cleanup ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:12:52 np0005539551 systemd[1]: libpod-conmon-ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c.scope: Deactivated successfully.
Nov 29 03:12:52 np0005539551 podman[259740]: 2025-11-29 08:12:52.418480928 +0000 UTC m=+0.044020473 container remove ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.424 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f347a53-a6c2-4952-8cfc-84dc495811fa]: (4, ('Sat Nov 29 08:12:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2 (ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c)\nffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c\nSat Nov 29 08:12:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2 (ffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c)\nffc569a74822366e764cd7e5f566707225756be951f5eac9b068f97fcc9d145c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.426 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2a3f7a-cd8a-48c3-bc09-7103382f0ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.427 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef7a3323-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 kernel: tapef7a3323-90: left promiscuous mode
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.443 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.446 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[83538c75-a468-4696-beb5-6c2b33b3299d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.465 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da41e9a6-e9ae-43fd-8ae0-e0ee1ab1078a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.468 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce9672b-efef-48c9-99e8-3d3c2af881d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.469 227364 INFO nova.virt.libvirt.driver [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Deleting instance files /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c_del#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.470 227364 INFO nova.virt.libvirt.driver [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Deletion of /var/lib/nova/instances/eda0d174-a55c-4f6c-a128-8f10face8c8c_del complete#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.485 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[30c42be7-dee0-43b6-a128-dc0f9148c242]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701664, 'reachable_time': 41868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259756, 'error': None, 'target': 'ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.487 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef7a3323-9113-4395-8455-c2b1e5a382e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:52.487 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[2722d10a-6a12-4b1d-a280-7957f9897644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.526 227364 INFO nova.compute.manager [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.526 227364 DEBUG oslo.service.loopingcall [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.527 227364 DEBUG nova.compute.manager [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:12:52 np0005539551 nova_compute[227360]: 2025-11-29 08:12:52.527 227364 DEBUG nova.network.neutron [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:12:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 03:12:52 np0005539551 systemd[1]: run-netns-ovnmeta\x2def7a3323\x2d9113\x2d4395\x2d8455\x2dc2b1e5a382e2.mount: Deactivated successfully.
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.118 227364 DEBUG nova.compute.manager [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-unplugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.119 227364 DEBUG oslo_concurrency.lockutils [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.119 227364 DEBUG oslo_concurrency.lockutils [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.119 227364 DEBUG oslo_concurrency.lockutils [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.120 227364 DEBUG nova.compute.manager [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] No waiting events found dispatching network-vif-unplugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.120 227364 DEBUG nova.compute.manager [req-49205017-4164-492f-8164-e9d36e5f6800 req-225da8a6-213a-4d35-a4d8-634cad82e30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-unplugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.227 227364 DEBUG nova.compute.manager [req-7e84ebc5-ba9f-42f4-bb58-03a7930277b3 req-5bdf4f88-8aaa-4a30-a4a5-3bfea92ec217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-deleted-adb6ac2c-d9da-47d8-91a1-21fee2c2a28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.228 227364 INFO nova.compute.manager [req-7e84ebc5-ba9f-42f4-bb58-03a7930277b3 req-5bdf4f88-8aaa-4a30-a4a5-3bfea92ec217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Neutron deleted interface adb6ac2c-d9da-47d8-91a1-21fee2c2a28b; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.228 227364 DEBUG nova.network.neutron [req-7e84ebc5-ba9f-42f4-bb58-03a7930277b3 req-5bdf4f88-8aaa-4a30-a4a5-3bfea92ec217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updating instance_info_cache with network_info: [{"id": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "address": "fa:16:3e:67:36:ae", "network": {"id": "df8c555e-da64-491a-8baf-3c0f59a8167d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-100933196", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a78a8a2-93", "ovs_interfaceid": "4a78a8a2-9369-4d10-a787-dcbfbcc434fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.282 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:53.283 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:53.284 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.292 227364 DEBUG nova.compute.manager [req-7e84ebc5-ba9f-42f4-bb58-03a7930277b3 req-5bdf4f88-8aaa-4a30-a4a5-3bfea92ec217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Detach interface failed, port_id=adb6ac2c-d9da-47d8-91a1-21fee2c2a28b, reason: Instance eda0d174-a55c-4f6c-a128-8f10face8c8c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.969 227364 DEBUG nova.network.neutron [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:53 np0005539551 nova_compute[227360]: 2025-11-29 08:12:53.992 227364 INFO nova.compute.manager [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 29 03:12:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.058 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.058 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:54.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.127 227364 DEBUG oslo_concurrency.processutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/835861461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.590 227364 DEBUG oslo_concurrency.processutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.595 227364 DEBUG nova.compute.provider_tree [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.627 227364 DEBUG nova.scheduler.client.report [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.651 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.692 227364 INFO nova.scheduler.client.report [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Deleted allocations for instance eda0d174-a55c-4f6c-a128-8f10face8c8c#033[00m
Nov 29 03:12:54 np0005539551 nova_compute[227360]: 2025-11-29 08:12:54.819 227364 DEBUG oslo_concurrency.lockutils [None req-e269d012-f737-436f-b131-ed324f4eda15 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.115 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:12:55.286 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.358 227364 DEBUG nova.compute.manager [req-cbd09f21-300d-4d7e-b247-0b23d286dd70 req-8adccf7f-f035-4493-aa65-99f1d02ebd87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-deleted-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.479 227364 DEBUG nova.compute.manager [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.480 227364 DEBUG oslo_concurrency.lockutils [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.481 227364 DEBUG oslo_concurrency.lockutils [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.481 227364 DEBUG oslo_concurrency.lockutils [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eda0d174-a55c-4f6c-a128-8f10face8c8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.482 227364 DEBUG nova.compute.manager [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] No waiting events found dispatching network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:55 np0005539551 nova_compute[227360]: 2025-11-29 08:12:55.482 227364 WARNING nova.compute.manager [req-ede0970e-f243-442f-ba91-857504b751a4 req-e6cbd90a-b1bb-46ee-be39-ee66b7adc8dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Received unexpected event network-vif-plugged-4a78a8a2-9369-4d10-a787-dcbfbcc434fd for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:12:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:56.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:56 np0005539551 nova_compute[227360]: 2025-11-29 08:12:56.527 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:57 np0005539551 nova_compute[227360]: 2025-11-29 08:12:57.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 03:12:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:12:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:58.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 03:13:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:00.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:00.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:00 np0005539551 nova_compute[227360]: 2025-11-29 08:13:00.116 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 03:13:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 03:13:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.051 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.521 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.521 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.553 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.737 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.738 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.746 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.746 227364 INFO nova.compute.claims [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:13:02 np0005539551 nova_compute[227360]: 2025-11-29 08:13:02.881 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1335805837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.286 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.294 227364 DEBUG nova.compute.provider_tree [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.314 227364 DEBUG nova.scheduler.client.report [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.334 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.335 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.404 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.406 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.441 227364 INFO nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.499 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.642 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.644 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.644 227364 INFO nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Creating image(s)#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.679 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.716 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.755 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.761 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.793 227364 DEBUG nova.policy [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8c106fd8f8d448ebb9235a2cec9752a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a79b4d10530a4e1b93fd74acae457024', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.829 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.830 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.830 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.830 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.855 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:03 np0005539551 nova_compute[227360]: 2025-11-29 08:13:03.860 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.191 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.277 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] resizing rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.493 227364 DEBUG nova.objects.instance [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lazy-loading 'migration_context' on Instance uuid bd19cbdf-750a-4c23-b574-0c21df92ab8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.507 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.508 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Ensure instance console log exists: /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.509 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.509 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:04 np0005539551 nova_compute[227360]: 2025-11-29 08:13:04.509 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:05 np0005539551 nova_compute[227360]: 2025-11-29 08:13:05.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:05 np0005539551 nova_compute[227360]: 2025-11-29 08:13:05.734 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Successfully created port: 079617df-2314-4964-a695-b9ce9a9e2a61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:13:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 03:13:06 np0005539551 nova_compute[227360]: 2025-11-29 08:13:06.941 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Successfully updated port: 079617df-2314-4964-a695-b9ce9a9e2a61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:06 np0005539551 nova_compute[227360]: 2025-11-29 08:13:06.956 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:06 np0005539551 nova_compute[227360]: 2025-11-29 08:13:06.957 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquired lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:06 np0005539551 nova_compute[227360]: 2025-11-29 08:13:06.957 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.011 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403972.010965, eda0d174-a55c-4f6c-a128-8f10face8c8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.012 227364 INFO nova.compute.manager [-] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.035 227364 DEBUG nova.compute.manager [None req-6b31cbec-c2d6-4b3f-83df-6777e102f945 - - - - - -] [instance: eda0d174-a55c-4f6c-a128-8f10face8c8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.232 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.825 227364 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-changed-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.825 227364 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Refreshing instance network info cache due to event network-changed-079617df-2314-4964-a695-b9ce9a9e2a61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:07 np0005539551 nova_compute[227360]: 2025-11-29 08:13:07.826 227364 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:08.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.908 227364 DEBUG nova.network.neutron [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Updating instance_info_cache with network_info: [{"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.934 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Releasing lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.935 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Instance network_info: |[{"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.935 227364 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.935 227364 DEBUG nova.network.neutron [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Refreshing network info cache for port 079617df-2314-4964-a695-b9ce9a9e2a61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.938 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Start _get_guest_xml network_info=[{"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.942 227364 WARNING nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.946 227364 DEBUG nova.virt.libvirt.host [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.946 227364 DEBUG nova.virt.libvirt.host [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.949 227364 DEBUG nova.virt.libvirt.host [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.949 227364 DEBUG nova.virt.libvirt.host [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.950 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.950 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.950 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.950 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.951 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.952 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.952 227364 DEBUG nova.virt.hardware [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:13:08 np0005539551 nova_compute[227360]: 2025-11-29 08:13:08.954 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2359390907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.374 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.402 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.405 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2355168718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.862 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.864 227364 DEBUG nova.virt.libvirt.vif [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1810353612',display_name='tempest-InstanceActionsNegativeTestJSON-server-1810353612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1810353612',id=88,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79b4d10530a4e1b93fd74acae457024',ramdisk_id='',reservation_id='r-v5dkdgwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1811421354',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1811421354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:03Z,user_data=None,user_id='8c106fd8f8d448ebb9235a2cec9752a0',uuid=bd19cbdf-750a-4c23-b574-0c21df92ab8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.865 227364 DEBUG nova.network.os_vif_util [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converting VIF {"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.866 227364 DEBUG nova.network.os_vif_util [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.868 227364 DEBUG nova.objects.instance [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd19cbdf-750a-4c23-b574-0c21df92ab8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.880 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <uuid>bd19cbdf-750a-4c23-b574-0c21df92ab8f</uuid>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <name>instance-00000058</name>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1810353612</nova:name>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:13:08</nova:creationTime>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:user uuid="8c106fd8f8d448ebb9235a2cec9752a0">tempest-InstanceActionsNegativeTestJSON-1811421354-project-member</nova:user>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:project uuid="a79b4d10530a4e1b93fd74acae457024">tempest-InstanceActionsNegativeTestJSON-1811421354</nova:project>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <nova:port uuid="079617df-2314-4964-a695-b9ce9a9e2a61">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="serial">bd19cbdf-750a-4c23-b574-0c21df92ab8f</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="uuid">bd19cbdf-750a-4c23-b574-0c21df92ab8f</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:96:65:06"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <target dev="tap079617df-23"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/console.log" append="off"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:13:09 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:13:09 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:13:09 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:13:09 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.882 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Preparing to wait for external event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.882 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.882 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.883 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.883 227364 DEBUG nova.virt.libvirt.vif [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1810353612',display_name='tempest-InstanceActionsNegativeTestJSON-server-1810353612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1810353612',id=88,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a79b4d10530a4e1b93fd74acae457024',ramdisk_id='',reservation_id='r-v5dkdgwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1811421354',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1811421354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:03Z,user_data=None,user_id='8c106fd8f8d448ebb9235a2cec9752a0',uuid=bd19cbdf-750a-4c23-b574-0c21df92ab8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.884 227364 DEBUG nova.network.os_vif_util [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converting VIF {"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.884 227364 DEBUG nova.network.os_vif_util [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.885 227364 DEBUG os_vif [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.885 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.886 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.886 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.889 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.889 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap079617df-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.890 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap079617df-23, col_values=(('external_ids', {'iface-id': '079617df-2314-4964-a695-b9ce9a9e2a61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:65:06', 'vm-uuid': 'bd19cbdf-750a-4c23-b574-0c21df92ab8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539551 NetworkManager[48922]: <info>  [1764403989.8926] manager: (tap079617df-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.899 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.900 227364 INFO os_vif [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23')#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.967 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.968 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.968 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] No VIF found with MAC fa:16:3e:96:65:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.968 227364 INFO nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Using config drive#033[00m
Nov 29 03:13:09 np0005539551 nova_compute[227360]: 2025-11-29 08:13:09.995 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:10.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:10 np0005539551 nova_compute[227360]: 2025-11-29 08:13:10.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:10 np0005539551 nova_compute[227360]: 2025-11-29 08:13:10.879 227364 INFO nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Creating config drive at /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config#033[00m
Nov 29 03:13:10 np0005539551 nova_compute[227360]: 2025-11-29 08:13:10.890 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlmfsmdd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.036 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlmfsmdd" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.066 227364 DEBUG nova.storage.rbd_utils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] rbd image bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.069 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.295 227364 DEBUG oslo_concurrency.processutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config bd19cbdf-750a-4c23-b574-0c21df92ab8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.296 227364 INFO nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Deleting local config drive /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:11 np0005539551 kernel: tap079617df-23: entered promiscuous mode
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.3456] manager: (tap079617df-23): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 03:13:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:11Z|00361|binding|INFO|Claiming lport 079617df-2314-4964-a695-b9ce9a9e2a61 for this chassis.
Nov 29 03:13:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:11Z|00362|binding|INFO|079617df-2314-4964-a695-b9ce9a9e2a61: Claiming fa:16:3e:96:65:06 10.100.0.11
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.347 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.361 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:65:06 10.100.0.11'], port_security=['fa:16:3e:96:65:06 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bd19cbdf-750a-4c23-b574-0c21df92ab8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901954cf-1f45-4d5e-b447-237316b6baa4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79b4d10530a4e1b93fd74acae457024', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe6aa664-b43c-4186-8553-92f466c331e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111aa050-1ad1-495c-8473-b4b831b4432e, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=079617df-2314-4964-a695-b9ce9a9e2a61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.362 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 079617df-2314-4964-a695-b9ce9a9e2a61 in datapath 901954cf-1f45-4d5e-b447-237316b6baa4 bound to our chassis#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.364 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 901954cf-1f45-4d5e-b447-237316b6baa4#033[00m
Nov 29 03:13:11 np0005539551 systemd-machined[190756]: New machine qemu-39-instance-00000058.
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.377 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a6a6b2-8bcd-4d06-81ef-fb92da72a3e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.378 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap901954cf-11 in ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.380 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap901954cf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.380 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5e521c-ed14-450f-8f65-026d404bb1fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.381 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeba5ff-71c8-4bdd-bbd8-487a368824b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.395 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9abc6bd1-5e36-483e-96db-7be5ecee10e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 systemd[1]: Started Virtual Machine qemu-39-instance-00000058.
Nov 29 03:13:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:11Z|00363|binding|INFO|Setting lport 079617df-2314-4964-a695-b9ce9a9e2a61 ovn-installed in OVS
Nov 29 03:13:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:11Z|00364|binding|INFO|Setting lport 079617df-2314-4964-a695-b9ce9a9e2a61 up in Southbound
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.425 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.424 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9979fe0d-f6ba-4d10-9f96-7ce6a658b250]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 systemd-udevd[260108]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.454 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a234e841-627d-43d0-b197-291403674dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.460 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f0cf02-b437-4707-9283-c99d3696b5ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 systemd-udevd[260112]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.4622] manager: (tap901954cf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.4645] device (tap079617df-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.4660] device (tap079617df-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.495 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0abbcf-c0e6-4a2f-95f9-b26d03c4e4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.498 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[aacd1be6-a943-4b72-926a-f4969f8f6215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.5284] device (tap901954cf-10): carrier: link connected
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.528 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[57736fc3-3fd5-4400-b54a-81c78b0bda2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.545 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1caecb-a7e6-4b86-af81-b3cdee65a0e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap901954cf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c3:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703840, 'reachable_time': 34688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260136, 'error': None, 'target': 'ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.561 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6814e1f5-52e5-499d-b659-7130db85ab4e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:c3bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703840, 'tstamp': 703840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260137, 'error': None, 'target': 'ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.578 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[31b6d315-9ef9-43f1-9d99-438d1bedf09e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap901954cf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c3:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703840, 'reachable_time': 34688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260138, 'error': None, 'target': 'ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.613 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a74f83a3-50e9-4f48-ac63-e2c77ff235a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.670 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[49ab6391-074d-43fb-8849-1ec5aa411337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.672 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap901954cf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.672 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.672 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap901954cf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:11 np0005539551 NetworkManager[48922]: <info>  [1764403991.6751] manager: (tap901954cf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 03:13:11 np0005539551 kernel: tap901954cf-10: entered promiscuous mode
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.674 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.677 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap901954cf-10, col_values=(('external_ids', {'iface-id': 'd849c6af-d0b3-496f-a838-514d1a7ffd0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:11Z|00365|binding|INFO|Releasing lport d849c6af-d0b3-496f-a838-514d1a7ffd0c from this chassis (sb_readonly=0)
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.678 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.696 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/901954cf-1f45-4d5e-b447-237316b6baa4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/901954cf-1f45-4d5e-b447-237316b6baa4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.696 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b6455cb3-5d21-49c7-b866-cff45e0bf489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.697 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-901954cf-1f45-4d5e-b447-237316b6baa4
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/901954cf-1f45-4d5e-b447-237316b6baa4.pid.haproxy
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 901954cf-1f45-4d5e-b447-237316b6baa4
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:11.698 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4', 'env', 'PROCESS_TAG=haproxy-901954cf-1f45-4d5e-b447-237316b6baa4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/901954cf-1f45-4d5e-b447-237316b6baa4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.756 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403991.7559443, bd19cbdf-750a-4c23-b574-0c21df92ab8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.757 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.857 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.865 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403991.756991, bd19cbdf-750a-4c23-b574-0c21df92ab8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.865 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.911 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.914 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:11 np0005539551 nova_compute[227360]: 2025-11-29 08:13:11.936 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:12 np0005539551 podman[260210]: 2025-11-29 08:13:12.048405065 +0000 UTC m=+0.047837023 container create d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:13:12 np0005539551 systemd[1]: Started libpod-conmon-d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393.scope.
Nov 29 03:13:12 np0005539551 nova_compute[227360]: 2025-11-29 08:13:12.099 227364 DEBUG nova.network.neutron [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Updated VIF entry in instance network info cache for port 079617df-2314-4964-a695-b9ce9a9e2a61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:12.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:12 np0005539551 nova_compute[227360]: 2025-11-29 08:13:12.100 227364 DEBUG nova.network.neutron [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Updating instance_info_cache with network_info: [{"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:13:12 np0005539551 podman[260210]: 2025-11-29 08:13:12.020171296 +0000 UTC m=+0.019603274 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8489a759330d9bf2cc8c0e4b459637238fcd9770f5d1efb9139e8e0be7b6e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:12 np0005539551 nova_compute[227360]: 2025-11-29 08:13:12.121 227364 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bd19cbdf-750a-4c23-b574-0c21df92ab8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:12 np0005539551 podman[260210]: 2025-11-29 08:13:12.133047766 +0000 UTC m=+0.132479744 container init d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:12 np0005539551 podman[260210]: 2025-11-29 08:13:12.138413141 +0000 UTC m=+0.137845100 container start d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:13:12 np0005539551 podman[260226]: 2025-11-29 08:13:12.144267091 +0000 UTC m=+0.059355635 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:12 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [NOTICE]   (260285) : New worker (260293) forked
Nov 29 03:13:12 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [NOTICE]   (260285) : Loading success.
Nov 29 03:13:12 np0005539551 podman[260227]: 2025-11-29 08:13:12.164454929 +0000 UTC m=+0.075755270 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:13:12 np0005539551 podman[260223]: 2025-11-29 08:13:12.16595389 +0000 UTC m=+0.084258602 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.441 227364 DEBUG nova.compute.manager [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.442 227364 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.442 227364 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.442 227364 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.442 227364 DEBUG nova.compute.manager [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Processing event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.443 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.447 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764403993.447044, bd19cbdf-750a-4c23-b574-0c21df92ab8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.447 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.448 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.451 227364 INFO nova.virt.libvirt.driver [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Instance spawned successfully.#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.451 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.493 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.499 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.500 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.501 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.502 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.503 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.504 227364 DEBUG nova.virt.libvirt.driver [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.511 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.538 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.559 227364 INFO nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Took 9.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.560 227364 DEBUG nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.668 227364 INFO nova.compute.manager [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Took 10.96 seconds to build instance.#033[00m
Nov 29 03:13:13 np0005539551 nova_compute[227360]: 2025-11-29 08:13:13.701 227364 DEBUG oslo_concurrency.lockutils [None req-8a1cd806-5ea0-470f-a2dc-99ffa241756a 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 03:13:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:14.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:14 np0005539551 nova_compute[227360]: 2025-11-29 08:13:14.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.122 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.983 227364 DEBUG nova.compute.manager [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.984 227364 DEBUG oslo_concurrency.lockutils [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.984 227364 DEBUG oslo_concurrency.lockutils [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.985 227364 DEBUG oslo_concurrency.lockutils [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.985 227364 DEBUG nova.compute.manager [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] No waiting events found dispatching network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:15 np0005539551 nova_compute[227360]: 2025-11-29 08:13:15.985 227364 WARNING nova.compute.manager [req-ff329923-5ff5-43cc-8758-ad6260e8bb12 req-1d661afe-b2da-4ea3-be5d-1a132604f1e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received unexpected event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:16.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:16.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.252 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.253 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.253 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.253 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.254 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.255 227364 INFO nova.compute.manager [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Terminating instance#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.255 227364 DEBUG nova.compute.manager [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:13:16 np0005539551 kernel: tap079617df-23 (unregistering): left promiscuous mode
Nov 29 03:13:16 np0005539551 NetworkManager[48922]: <info>  [1764403996.2958] device (tap079617df-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:16Z|00366|binding|INFO|Releasing lport 079617df-2314-4964-a695-b9ce9a9e2a61 from this chassis (sb_readonly=0)
Nov 29 03:13:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:16Z|00367|binding|INFO|Setting lport 079617df-2314-4964-a695-b9ce9a9e2a61 down in Southbound
Nov 29 03:13:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:16Z|00368|binding|INFO|Removing iface tap079617df-23 ovn-installed in OVS
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.305 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.308 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:65:06 10.100.0.11'], port_security=['fa:16:3e:96:65:06 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bd19cbdf-750a-4c23-b574-0c21df92ab8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-901954cf-1f45-4d5e-b447-237316b6baa4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a79b4d10530a4e1b93fd74acae457024', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe6aa664-b43c-4186-8553-92f466c331e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111aa050-1ad1-495c-8473-b4b831b4432e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=079617df-2314-4964-a695-b9ce9a9e2a61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.309 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 079617df-2314-4964-a695-b9ce9a9e2a61 in datapath 901954cf-1f45-4d5e-b447-237316b6baa4 unbound from our chassis#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.310 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 901954cf-1f45-4d5e-b447-237316b6baa4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.311 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[71a68d13-e9a9-42bb-b959-4675aebf495d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.312 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4 namespace which is not needed anymore#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 29 03:13:16 np0005539551 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Consumed 3.229s CPU time.
Nov 29 03:13:16 np0005539551 systemd-machined[190756]: Machine qemu-39-instance-00000058 terminated.
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [NOTICE]   (260285) : haproxy version is 2.8.14-c23fe91
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [NOTICE]   (260285) : path to executable is /usr/sbin/haproxy
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [WARNING]  (260285) : Exiting Master process...
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [WARNING]  (260285) : Exiting Master process...
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [ALERT]    (260285) : Current worker (260293) exited with code 143 (Terminated)
Nov 29 03:13:16 np0005539551 neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4[260243]: [WARNING]  (260285) : All workers exited. Exiting... (0)
Nov 29 03:13:16 np0005539551 systemd[1]: libpod-d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393.scope: Deactivated successfully.
Nov 29 03:13:16 np0005539551 podman[260329]: 2025-11-29 08:13:16.431589841 +0000 UTC m=+0.043242082 container died d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:16 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393-userdata-shm.mount: Deactivated successfully.
Nov 29 03:13:16 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ab8489a759330d9bf2cc8c0e4b459637238fcd9770f5d1efb9139e8e0be7b6e1-merged.mount: Deactivated successfully.
Nov 29 03:13:16 np0005539551 podman[260329]: 2025-11-29 08:13:16.467101829 +0000 UTC m=+0.078754070 container cleanup d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.473 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.478 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.489 227364 INFO nova.virt.libvirt.driver [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Instance destroyed successfully.#033[00m
Nov 29 03:13:16 np0005539551 systemd[1]: libpod-conmon-d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393.scope: Deactivated successfully.
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.490 227364 DEBUG nova.objects.instance [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lazy-loading 'resources' on Instance uuid bd19cbdf-750a-4c23-b574-0c21df92ab8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.508 227364 DEBUG nova.virt.libvirt.vif [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1810353612',display_name='tempest-InstanceActionsNegativeTestJSON-server-1810353612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1810353612',id=88,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a79b4d10530a4e1b93fd74acae457024',ramdisk_id='',reservation_id='r-v5dkdgwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1811421354',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1811421354-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:13Z,user_data=None,user_id='8c106fd8f8d448ebb9235a2cec9752a0',uuid=bd19cbdf-750a-4c23-b574-0c21df92ab8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.509 227364 DEBUG nova.network.os_vif_util [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converting VIF {"id": "079617df-2314-4964-a695-b9ce9a9e2a61", "address": "fa:16:3e:96:65:06", "network": {"id": "901954cf-1f45-4d5e-b447-237316b6baa4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1849970683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a79b4d10530a4e1b93fd74acae457024", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap079617df-23", "ovs_interfaceid": "079617df-2314-4964-a695-b9ce9a9e2a61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.510 227364 DEBUG nova.network.os_vif_util [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.510 227364 DEBUG os_vif [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.513 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.513 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079617df-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.517 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.520 227364 INFO os_vif [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:65:06,bridge_name='br-int',has_traffic_filtering=True,id=079617df-2314-4964-a695-b9ce9a9e2a61,network=Network(901954cf-1f45-4d5e-b447-237316b6baa4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap079617df-23')#033[00m
Nov 29 03:13:16 np0005539551 podman[260364]: 2025-11-29 08:13:16.538127844 +0000 UTC m=+0.045432761 container remove d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.544 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a551a6ff-cd3c-42d2-8545-7d8a77f1b7a4]: (4, ('Sat Nov 29 08:13:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4 (d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393)\nd740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393\nSat Nov 29 08:13:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4 (d740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393)\nd740597ab543449bbb2e2357f9423ad7050d99187cec0493c13469017f2e2393\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.546 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[018fa937-5186-4365-b961-124fcdd1becb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.546 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap901954cf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.548 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 kernel: tap901954cf-10: left promiscuous mode
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.568 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.569 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9404c7f8-1773-4861-946c-a1267516f27d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.581 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff6c353-ccc6-4065-ab8d-e8d4330b6b89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.582 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[78234436-a830-4261-b5e0-18952f88427e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.601 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[232067a5-2794-4433-ad48-94fe1180113e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703833, 'reachable_time': 24134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260402, 'error': None, 'target': 'ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.604 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-901954cf-1f45-4d5e-b447-237316b6baa4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:13:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:16.604 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[97e60883-39e7-4b0e-8dba-e986393ba748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:16 np0005539551 systemd[1]: run-netns-ovnmeta\x2d901954cf\x2d1f45\x2d4d5e\x2db447\x2d237316b6baa4.mount: Deactivated successfully.
Nov 29 03:13:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.952 227364 INFO nova.virt.libvirt.driver [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Deleting instance files /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f_del#033[00m
Nov 29 03:13:16 np0005539551 nova_compute[227360]: 2025-11-29 08:13:16.954 227364 INFO nova.virt.libvirt.driver [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Deletion of /var/lib/nova/instances/bd19cbdf-750a-4c23-b574-0c21df92ab8f_del complete#033[00m
Nov 29 03:13:17 np0005539551 nova_compute[227360]: 2025-11-29 08:13:17.010 227364 INFO nova.compute.manager [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:13:17 np0005539551 nova_compute[227360]: 2025-11-29 08:13:17.011 227364 DEBUG oslo.service.loopingcall [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:13:17 np0005539551 nova_compute[227360]: 2025-11-29 08:13:17.012 227364 DEBUG nova.compute.manager [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:13:17 np0005539551 nova_compute[227360]: 2025-11-29 08:13:17.012 227364 DEBUG nova.network.neutron [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:13:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 03:13:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.099 227364 DEBUG nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-unplugged-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.100 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.100 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.100 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.100 227364 DEBUG nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] No waiting events found dispatching network-vif-unplugged-079617df-2314-4964-a695-b9ce9a9e2a61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.101 227364 DEBUG nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-unplugged-079617df-2314-4964-a695-b9ce9a9e2a61 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.101 227364 DEBUG nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.101 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.101 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.102 227364 DEBUG oslo_concurrency.lockutils [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.102 227364 DEBUG nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] No waiting events found dispatching network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.102 227364 WARNING nova.compute.manager [req-9b46db59-0194-40ca-9fe7-4a26b13d2098 req-370ef552-9579-4f91-9764-1905e2acb883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received unexpected event network-vif-plugged-079617df-2314-4964-a695-b9ce9a9e2a61 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:13:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:18.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.435 227364 DEBUG nova.network.neutron [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.488 227364 INFO nova.compute.manager [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Took 1.48 seconds to deallocate network for instance.#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.567 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.567 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:18 np0005539551 nova_compute[227360]: 2025-11-29 08:13:18.611 227364 DEBUG oslo_concurrency.processutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2612651554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.087 227364 DEBUG oslo_concurrency.processutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.096 227364 DEBUG nova.compute.provider_tree [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.116 227364 DEBUG nova.scheduler.client.report [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.145 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.172 227364 INFO nova.scheduler.client.report [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Deleted allocations for instance bd19cbdf-750a-4c23-b574-0c21df92ab8f#033[00m
Nov 29 03:13:19 np0005539551 nova_compute[227360]: 2025-11-29 08:13:19.231 227364 DEBUG oslo_concurrency.lockutils [None req-16968feb-0469-4916-aa85-b0eaa634456f 8c106fd8f8d448ebb9235a2cec9752a0 a79b4d10530a4e1b93fd74acae457024 - - default default] Lock "bd19cbdf-750a-4c23-b574-0c21df92ab8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:19.863 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:19.863 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:19.864 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:20.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:20 np0005539551 nova_compute[227360]: 2025-11-29 08:13:20.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539551 nova_compute[227360]: 2025-11-29 08:13:20.216 227364 DEBUG nova.compute.manager [req-11994c00-86be-484a-b49f-888a1c4fc6f6 req-abc5d618-55c2-49f4-a7ec-7c279f1bc0d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Received event network-vif-deleted-079617df-2314-4964-a695-b9ce9a9e2a61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:21 np0005539551 nova_compute[227360]: 2025-11-29 08:13:21.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 03:13:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:22.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:22 np0005539551 nova_compute[227360]: 2025-11-29 08:13:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.061 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.437 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:13:23 np0005539551 nova_compute[227360]: 2025-11-29 08:13:23.437 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:24.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:24.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:25 np0005539551 nova_compute[227360]: 2025-11-29 08:13:25.157 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:25 np0005539551 nova_compute[227360]: 2025-11-29 08:13:25.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:26.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539551 nova_compute[227360]: 2025-11-29 08:13:26.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:26 np0005539551 nova_compute[227360]: 2025-11-29 08:13:26.518 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.824 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.824 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.844 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.924 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.925 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.932 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:27 np0005539551 nova_compute[227360]: 2025-11-29 08:13:27.933 227364 INFO nova.compute.claims [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:13:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:28.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.052 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3580975306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.503 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.510 227364 DEBUG nova.compute.provider_tree [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.525 227364 DEBUG nova.scheduler.client.report [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.544 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.545 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.583 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.584 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.605 227364 INFO nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.621 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.730 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.735 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.735 227364 INFO nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Creating image(s)#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.763 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.790 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.814 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.818 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.909 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.911 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.911 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.911 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.935 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.939 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 076bf9f6-6607-4b08-b733-864854aad069_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:28 np0005539551 nova_compute[227360]: 2025-11-29 08:13:28.961 227364 DEBUG nova.policy [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.212 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 076bf9f6-6607-4b08-b733-864854aad069_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.298 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] resizing rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.417 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.425 227364 DEBUG nova.objects.instance [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.451 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.452 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Ensure instance console log exists: /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.452 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.453 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.453 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.458 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.458 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.458 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.458 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.459 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/583828928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.878 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Successfully created port: 5a18d86b-a59a-42b2-a09d-a9db462f6034 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:13:29 np0005539551 nova_compute[227360]: 2025-11-29 08:13:29.894 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.062 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.063 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4547MB free_disk=20.94269561767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.063 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.063 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.158 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.164 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.165 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.165 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.254 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2613908886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.707 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.713 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.763 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.809 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:13:30 np0005539551 nova_compute[227360]: 2025-11-29 08:13:30.809 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.309 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Successfully updated port: 5a18d86b-a59a-42b2-a09d-a9db462f6034 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.326 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.326 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.327 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.443 227364 DEBUG nova.compute.manager [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-changed-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.443 227364 DEBUG nova.compute.manager [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Refreshing instance network info cache due to event network-changed-5a18d86b-a59a-42b2-a09d-a9db462f6034. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.444 227364 DEBUG oslo_concurrency.lockutils [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.488 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403996.4872954, bd19cbdf-750a-4c23-b574-0c21df92ab8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.489 227364 INFO nova.compute.manager [-] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.515 227364 DEBUG nova.compute.manager [None req-d39d4e76-5ed8-42be-974a-a78f88ef1e65 - - - - - -] [instance: bd19cbdf-750a-4c23-b574-0c21df92ab8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.519 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:31 np0005539551 nova_compute[227360]: 2025-11-29 08:13:31.626 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.768454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011768513, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 258, "total_data_size": 2881376, "memory_usage": 2922376, "flush_reason": "Manual Compaction"}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011778490, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1270000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40507, "largest_seqno": 41938, "table_properties": {"data_size": 1264835, "index_size": 2497, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13733, "raw_average_key_size": 21, "raw_value_size": 1253550, "raw_average_value_size": 1974, "num_data_blocks": 109, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403917, "oldest_key_time": 1764403917, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 10092 microseconds, and 5301 cpu microseconds.
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.778542) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1270000 bytes OK
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.778567) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.779907) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.779930) EVENT_LOG_v1 {"time_micros": 1764404011779923, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.779952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2874539, prev total WAL file size 2874539, number of live WAL files 2.
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.781417) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1240KB)], [78(11MB)]
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011781501, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 13564331, "oldest_snapshot_seqno": -1}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7295 keys, 10352686 bytes, temperature: kUnknown
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011843567, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 10352686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10305100, "index_size": 28274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188915, "raw_average_key_size": 25, "raw_value_size": 10175428, "raw_average_value_size": 1394, "num_data_blocks": 1114, "num_entries": 7295, "num_filter_entries": 7295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.843932) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10352686 bytes
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.845489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.1 rd, 166.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.8) write-amplify(8.2) OK, records in: 7782, records dropped: 487 output_compression: NoCompression
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.845520) EVENT_LOG_v1 {"time_micros": 1764404011845507, "job": 48, "event": "compaction_finished", "compaction_time_micros": 62205, "compaction_time_cpu_micros": 29080, "output_level": 6, "num_output_files": 1, "total_output_size": 10352686, "num_input_records": 7782, "num_output_records": 7295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011846273, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011850947, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.781262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:32.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.254 227364 DEBUG nova.network.neutron [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.282 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.283 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Instance network_info: |[{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.283 227364 DEBUG oslo_concurrency.lockutils [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.283 227364 DEBUG nova.network.neutron [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Refreshing network info cache for port 5a18d86b-a59a-42b2-a09d-a9db462f6034 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.287 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Start _get_guest_xml network_info=[{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.292 227364 WARNING nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.297 227364 DEBUG nova.virt.libvirt.host [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.297 227364 DEBUG nova.virt.libvirt.host [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.303 227364 DEBUG nova.virt.libvirt.host [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.304 227364 DEBUG nova.virt.libvirt.host [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.305 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.305 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.306 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.306 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.307 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.307 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.307 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.307 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.308 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.308 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.308 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.309 227364 DEBUG nova.virt.hardware [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.312 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1911708064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.799 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.824 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:33 np0005539551 nova_compute[227360]: 2025-11-29 08:13:33.828 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2105223715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.242 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.244 227364 DEBUG nova.virt.libvirt.vif [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-563695543',display_name='tempest-ServerActionsTestOtherB-server-563695543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-563695543',id=89,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-0xergg3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTe
stOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:28Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=076bf9f6-6607-4b08-b733-864854aad069,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.244 227364 DEBUG nova.network.os_vif_util [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.245 227364 DEBUG nova.network.os_vif_util [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.247 227364 DEBUG nova.objects.instance [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.273 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <uuid>076bf9f6-6607-4b08-b733-864854aad069</uuid>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <name>instance-00000059</name>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerActionsTestOtherB-server-563695543</nova:name>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:13:33</nova:creationTime>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <nova:port uuid="5a18d86b-a59a-42b2-a09d-a9db462f6034">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="serial">076bf9f6-6607-4b08-b733-864854aad069</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="uuid">076bf9f6-6607-4b08-b733-864854aad069</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/076bf9f6-6607-4b08-b733-864854aad069_disk">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/076bf9f6-6607-4b08-b733-864854aad069_disk.config">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:46:a5:ff"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <target dev="tap5a18d86b-a5"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/console.log" append="off"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:13:34 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:13:34 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:13:34 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:13:34 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.274 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Preparing to wait for external event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.275 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.275 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.275 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.276 227364 DEBUG nova.virt.libvirt.vif [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-563695543',display_name='tempest-ServerActionsTestOtherB-server-563695543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-563695543',id=89,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-0xergg3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:28Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=076bf9f6-6607-4b08-b733-864854aad069,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.276 227364 DEBUG nova.network.os_vif_util [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.277 227364 DEBUG nova.network.os_vif_util [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.278 227364 DEBUG os_vif [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.279 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.279 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.280 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.282 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.283 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a18d86b-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.283 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a18d86b-a5, col_values=(('external_ids', {'iface-id': '5a18d86b-a59a-42b2-a09d-a9db462f6034', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:a5:ff', 'vm-uuid': '076bf9f6-6607-4b08-b733-864854aad069'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:34 np0005539551 NetworkManager[48922]: <info>  [1764404014.2858] manager: (tap5a18d86b-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.292 227364 INFO os_vif [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5')#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.358 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.358 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.359 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:46:a5:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.360 227364 INFO nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Using config drive#033[00m
Nov 29 03:13:34 np0005539551 nova_compute[227360]: 2025-11-29 08:13:34.391 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.203 227364 INFO nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Creating config drive at /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.208 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplreospwv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.234 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.343 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplreospwv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.379 227364 DEBUG nova.storage.rbd_utils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 076bf9f6-6607-4b08-b733-864854aad069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.382 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config 076bf9f6-6607-4b08-b733-864854aad069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.544 227364 DEBUG oslo_concurrency.processutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config 076bf9f6-6607-4b08-b733-864854aad069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.545 227364 INFO nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Deleting local config drive /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:35 np0005539551 kernel: tap5a18d86b-a5: entered promiscuous mode
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.5877] manager: (tap5a18d86b-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.588 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:35Z|00369|binding|INFO|Claiming lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 for this chassis.
Nov 29 03:13:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:35Z|00370|binding|INFO|5a18d86b-a59a-42b2-a09d-a9db462f6034: Claiming fa:16:3e:46:a5:ff 10.100.0.11
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.593 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.596 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.600 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.605 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.6064] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.6072] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 03:13:35 np0005539551 systemd-udevd[260791]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.6282] device (tap5a18d86b-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.6297] device (tap5a18d86b-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.697 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a5:ff 10.100.0.11'], port_security=['fa:16:3e:46:a5:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '076bf9f6-6607-4b08-b733-864854aad069', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5a18d86b-a59a-42b2-a09d-a9db462f6034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.699 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5a18d86b-a59a-42b2-a09d-a9db462f6034 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.701 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.714 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfc9938-961e-48bf-ac05-ffcc7719c7f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.715 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.718 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.718 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bffa54e0-ae03-4b59-b0e1-e9eb9063179c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.719 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a7500af1-2905-46a1-ab20-e9d0c0973e08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 systemd-machined[190756]: New machine qemu-40-instance-00000059.
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.731 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6b3727-f2b6-41b5-acaf-e1656458b521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 systemd[1]: Started Virtual Machine qemu-40-instance-00000059.
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.759 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ea713a83-22da-4317-9cfe-146ee3f883b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.791 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[de2a9abd-26b2-40ce-b239-6704ca3baea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 systemd-udevd[260793]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.798 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6f8b02-19eb-4e2c-8edb-31e32a15b812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.8044] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.804 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.836 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7a0800-c93e-4b84-9c07-b15d409ebe92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.839 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8060dcbe-34ab-4ef5-812b-5a709fb047cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:35Z|00371|binding|INFO|Setting lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 ovn-installed in OVS
Nov 29 03:13:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:35Z|00372|binding|INFO|Setting lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 up in Southbound
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.847 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.8627] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.868 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[852860c5-c5f1-4415-ad86-a422e464f5c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5470545e-5517-4bc6-8c52-e9ad09576fb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706274, 'reachable_time': 18523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260827, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.895 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[13734998-9030-44a5-94cf-126f55f03990]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706274, 'tstamp': 706274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260828, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.911 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6db96bda-7d33-4020-8265-21e1e055d4b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706274, 'reachable_time': 18523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260829, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.940 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c607a0-a701-42e3-b0e9-280afda5e3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.993 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[435387fa-9e3d-4dec-a2da-4e553ce85485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.994 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.994 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:13:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:35.995 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:35 np0005539551 nova_compute[227360]: 2025-11-29 08:13:35.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:35 np0005539551 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:13:35 np0005539551 NetworkManager[48922]: <info>  [1764404015.9980] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:36.000 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:36Z|00373|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.016 227364 DEBUG nova.network.neutron [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updated VIF entry in instance network info cache for port 5a18d86b-a59a-42b2-a09d-a9db462f6034. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.016 227364 DEBUG nova.network.neutron [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.018 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:36.019 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:36.020 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4ddd4139-f065-4080-ad5d-9a5aa5bf76fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
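[editor's note] The "Unable to access ... .pid.haproxy" DEBUG above is expected on first provisioning: before spawning haproxy, the agent probes the pidfile and treats a missing file as "no proxy running yet" rather than an error. A minimal sketch of that tolerant-read pattern (an illustrative helper, not Neutron's actual `get_value_from_file` implementation):

```python
def get_value_from_file(path, converter=str):
    """Return the converted file contents, or None if the file is absent.

    Mirrors the behaviour logged above: a missing pidfile is a normal
    condition (the proxy has not been spawned yet), so ENOENT is only
    logged and None is returned, while other I/O errors would propagate.
    """
    try:
        with open(path) as f:
            return converter(f.read().strip())
    except FileNotFoundError:
        return None
```

The agent then decides whether to start a fresh haproxy (None) or signal the existing process (a pid was read).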
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:36.021 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:13:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:36.022 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.049 227364 DEBUG oslo_concurrency.lockutils [req-86d99c53-33b9-41ca-bcb9-6d9c6a270ccb req-cfad3822-a971-468c-a781-7310c4535ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
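[editor's note] The haproxy_cfg dumped above is rendered per network from a template before haproxy is exec'd inside the ovnmeta namespace. The per-network substitutions visible in the log can be re-rendered with a plain `string.Template`; this is a sketch based only on the fields shown above, not Neutron's actual `create_config_file` template:

```python
from string import Template

# Skeleton copied from the logged config; only the listen section and the
# per-network fields are reproduced here for illustration.
_CFG = Template("""\
global
    log-tag  haproxy-metadata-proxy-$network_id
    pidfile  $pidfile
    daemon

listen listener
    bind $bind_address:$bind_port
    server metadata $unix_socket
    http-request add-header X-OVN-Network-ID $network_id
""")

def render_metadata_proxy_cfg(network_id, pidfile,
                              unix_socket="/var/lib/neutron/metadata_proxy",
                              bind_address="169.254.169.254", bind_port=80):
    """Fill the per-network values into the skeleton shown in the log."""
    return _CFG.substitute(network_id=network_id, pidfile=pidfile,
                           unix_socket=unix_socket,
                           bind_address=bind_address, bind_port=bind_port)
```

Each proxy is thus distinguishable in syslog by its log-tag and announces the OVN network to the metadata service via the `X-OVN-Network-ID` header.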
Nov 29 03:13:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.099 227364 DEBUG nova.compute.manager [req-c34b7a9b-f9ac-43f3-93c5-685d272478b8 req-6258024d-9082-42d9-ada5-51ccee06b087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.100 227364 DEBUG oslo_concurrency.lockutils [req-c34b7a9b-f9ac-43f3-93c5-685d272478b8 req-6258024d-9082-42d9-ada5-51ccee06b087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.100 227364 DEBUG oslo_concurrency.lockutils [req-c34b7a9b-f9ac-43f3-93c5-685d272478b8 req-6258024d-9082-42d9-ada5-51ccee06b087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.100 227364 DEBUG oslo_concurrency.lockutils [req-c34b7a9b-f9ac-43f3-93c5-685d272478b8 req-6258024d-9082-42d9-ada5-51ccee06b087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.101 227364 DEBUG nova.compute.manager [req-c34b7a9b-f9ac-43f3-93c5-685d272478b8 req-6258024d-9082-42d9-ada5-51ccee06b087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Processing event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:13:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539551 podman[260861]: 2025-11-29 08:13:36.349460395 +0000 UTC m=+0.051461410 container create ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:13:36 np0005539551 systemd[1]: Started libpod-conmon-ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b.scope.
Nov 29 03:13:36 np0005539551 podman[260861]: 2025-11-29 08:13:36.320285605 +0000 UTC m=+0.022286640 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:36 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:13:36 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b21185a68d7b22e8a73540c0f7d4306ae5d717f8b11b546fd2c9587f1e92cf9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:36 np0005539551 podman[260861]: 2025-11-29 08:13:36.43562252 +0000 UTC m=+0.137623555 container init ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:36 np0005539551 podman[260861]: 2025-11-29 08:13:36.441598187 +0000 UTC m=+0.143599212 container start ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:36 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [NOTICE]   (260880) : New worker (260882) forked
Nov 29 03:13:36 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [NOTICE]   (260880) : Loading success.
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.885 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404016.8847582, 076bf9f6-6607-4b08-b733-864854aad069 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.886 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] VM Started (Lifecycle Event)
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.888 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.892 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.895 227364 INFO nova.virt.libvirt.driver [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Instance spawned successfully.
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.895 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.931 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.932 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.933 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.934 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.935 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.936 227364 DEBUG nova.virt.libvirt.driver [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.947 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.952 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:36 np0005539551 nova_compute[227360]: 2025-11-29 08:13:36.989 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.008 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.008 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404016.8859394, 076bf9f6-6607-4b08-b733-864854aad069 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.009 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.029 227364 INFO nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Took 8.30 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.030 227364 DEBUG nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.031 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.037 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404016.8909028, 076bf9f6-6607-4b08-b733-864854aad069 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.037 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.066 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.070 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.099 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.136 227364 INFO nova.compute.manager [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Took 9.24 seconds to build instance.#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.159 227364 DEBUG oslo_concurrency.lockutils [None req-738daa25-88a1-4d7e-8492-0da07aef9dd5 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.802 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.802 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:37 np0005539551 nova_compute[227360]: 2025-11-29 08:13:37.803 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:13:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:38.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.180 227364 INFO nova.compute.manager [None req-fed1d252-82b3-4980-8955-d34426d914da c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Get console output#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.185 227364 INFO oslo.privsep.daemon [None req-fed1d252-82b3-4980-8955-d34426d914da c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpurhh3d6c/privsep.sock']#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.642 227364 DEBUG nova.compute.manager [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.642 227364 DEBUG oslo_concurrency.lockutils [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.643 227364 DEBUG oslo_concurrency.lockutils [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.643 227364 DEBUG oslo_concurrency.lockutils [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.643 227364 DEBUG nova.compute.manager [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] No waiting events found dispatching network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.644 227364 WARNING nova.compute.manager [req-0bd57d17-2aab-4973-a7da-adb666ca0b1a req-5229a9be-ffcb-47a8-8980-e40240c4c719 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received unexpected event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.858 227364 INFO oslo.privsep.daemon [None req-fed1d252-82b3-4980-8955-d34426d914da c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.737 260937 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.741 260937 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.743 260937 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 03:13:38 np0005539551 nova_compute[227360]: 2025-11-29 08:13:38.743 260937 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260937#033[00m
Nov 29 03:13:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:13:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3552024218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:13:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:13:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3552024218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.624 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.625 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.666 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.805 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.806 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.824 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:39 np0005539551 nova_compute[227360]: 2025-11-29 08:13:39.824 227364 INFO nova.compute.claims [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.029 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:40.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1203250204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.488 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.495 227364 DEBUG nova.compute.provider_tree [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.526 227364 DEBUG nova.scheduler.client.report [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.774 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.774 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.914 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.915 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.961 227364 INFO nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:40 np0005539551 nova_compute[227360]: 2025-11-29 08:13:40.985 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.138 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.139 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.140 227364 INFO nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Creating image(s)#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.163 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.188 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.212 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.215 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.247 227364 DEBUG nova.policy [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05e59f4debd946ad9b7a4bac0e968bc6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.302 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.303 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.303 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.304 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.328 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.332 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6731026b-60ad-4c02-b8a2-807704d1bee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.633 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6731026b-60ad-4c02-b8a2-807704d1bee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.712 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] resizing rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.805 227364 DEBUG nova.objects.instance [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'migration_context' on Instance uuid 6731026b-60ad-4c02-b8a2-807704d1bee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.875 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.875 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Ensure instance console log exists: /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.876 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.876 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:41 np0005539551 nova_compute[227360]: 2025-11-29 08:13:41.876 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:42.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:42.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539551 podman[261130]: 2025-11-29 08:13:42.61417366 +0000 UTC m=+0.057203101 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:13:42 np0005539551 podman[261129]: 2025-11-29 08:13:42.62933469 +0000 UTC m=+0.066470476 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd)
Nov 29 03:13:42 np0005539551 podman[261128]: 2025-11-29 08:13:42.662202738 +0000 UTC m=+0.102111627 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:13:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:44 np0005539551 nova_compute[227360]: 2025-11-29 08:13:44.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:44 np0005539551 nova_compute[227360]: 2025-11-29 08:13:44.330 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Successfully created port: 05ac3c57-c712-428d-bb48-65f19e3ab17b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:13:45 np0005539551 nova_compute[227360]: 2025-11-29 08:13:45.278 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:46.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:46.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:48.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:48.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:49 np0005539551 nova_compute[227360]: 2025-11-29 08:13:49.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:49 np0005539551 nova_compute[227360]: 2025-11-29 08:13:49.470 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Successfully updated port: 05ac3c57-c712-428d-bb48-65f19e3ab17b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:13:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:50.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:50 np0005539551 nova_compute[227360]: 2025-11-29 08:13:50.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.564 227364 DEBUG nova.compute.manager [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-changed-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.565 227364 DEBUG nova.compute.manager [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Refreshing instance network info cache due to event network-changed-05ac3c57-c712-428d-bb48-65f19e3ab17b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.565 227364 DEBUG oslo_concurrency.lockutils [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.565 227364 DEBUG oslo_concurrency.lockutils [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.565 227364 DEBUG nova.network.neutron [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Refreshing network info cache for port 05ac3c57-c712-428d-bb48-65f19e3ab17b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:13:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:51Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:a5:ff 10.100.0.11
Nov 29 03:13:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:13:51Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:a5:ff 10.100.0.11
Nov 29 03:13:51 np0005539551 nova_compute[227360]: 2025-11-29 08:13:51.631 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:52 np0005539551 nova_compute[227360]: 2025-11-29 08:13:52.066 227364 DEBUG nova.network.neutron [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:13:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:53 np0005539551 nova_compute[227360]: 2025-11-29 08:13:53.548 227364 DEBUG nova.network.neutron [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539551 nova_compute[227360]: 2025-11-29 08:13:53.569 227364 DEBUG oslo_concurrency.lockutils [req-14db3d3a-9aa9-4faa-88d9-b347b0890f18 req-99d3eb89-f0c9-4bd3-b2da-a1862b53281b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:13:53 np0005539551 nova_compute[227360]: 2025-11-29 08:13:53.570 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquired lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:53 np0005539551 nova_compute[227360]: 2025-11-29 08:13:53.570 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:13:53 np0005539551 nova_compute[227360]: 2025-11-29 08:13:53.853 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:13:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:54.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:54 np0005539551 nova_compute[227360]: 2025-11-29 08:13:54.293 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:55 np0005539551 nova_compute[227360]: 2025-11-29 08:13:55.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:56.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.263 227364 DEBUG nova.network.neutron [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Updating instance_info_cache with network_info: [{"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.303 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Releasing lock "refresh_cache-6731026b-60ad-4c02-b8a2-807704d1bee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.304 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance network_info: |[{"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.306 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Start _get_guest_xml network_info=[{"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.313 227364 WARNING nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.323 227364 DEBUG nova.virt.libvirt.host [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.324 227364 DEBUG nova.virt.libvirt.host [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.329 227364 DEBUG nova.virt.libvirt.host [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.330 227364 DEBUG nova.virt.libvirt.host [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.332 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.332 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.333 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.334 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.334 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.334 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.335 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.335 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.336 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.336 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.337 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.337 227364 DEBUG nova.virt.hardware [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.340 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3802419816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.789 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.825 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:57 np0005539551 nova_compute[227360]: 2025-11-29 08:13:57.829 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:58.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:13:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:58.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/558651890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:58 np0005539551 nova_compute[227360]: 2025-11-29 08:13:58.271 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:58 np0005539551 nova_compute[227360]: 2025-11-29 08:13:58.273 227364 DEBUG nova.virt.libvirt.vif [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1563638659',display_name='tempest-ListServerFiltersTestJSON-instance-1563638659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1563638659',id=93,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-lyl34nvn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:41Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=6731026b-60ad-4c02-b8a2-807704d1bee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:58 np0005539551 nova_compute[227360]: 2025-11-29 08:13:58.274 227364 DEBUG nova.network.os_vif_util [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:58 np0005539551 nova_compute[227360]: 2025-11-29 08:13:58.275 227364 DEBUG nova.network.os_vif_util [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:58 np0005539551 nova_compute[227360]: 2025-11-29 08:13:58.276 227364 DEBUG nova.objects.instance [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6731026b-60ad-4c02-b8a2-807704d1bee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.295 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.449 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:59.453 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:13:59.455 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.570 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <uuid>6731026b-60ad-4c02-b8a2-807704d1bee2</uuid>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <name>instance-0000005d</name>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <memory>196608</memory>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1563638659</nova:name>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:13:57</nova:creationTime>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.micro">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:memory>192</nova:memory>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:user uuid="05e59f4debd946ad9b7a4bac0e968bc6">tempest-ListServerFiltersTestJSON-825347861-project-member</nova:user>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:project uuid="17c0ff0fdeac43fc8fa0d7bedad67c34">tempest-ListServerFiltersTestJSON-825347861</nova:project>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <nova:port uuid="05ac3c57-c712-428d-bb48-65f19e3ab17b">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="serial">6731026b-60ad-4c02-b8a2-807704d1bee2</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="uuid">6731026b-60ad-4c02-b8a2-807704d1bee2</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6731026b-60ad-4c02-b8a2-807704d1bee2_disk">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:08:6d:3e"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <target dev="tap05ac3c57-c7"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/console.log" append="off"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:13:59 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:13:59 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:13:59 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:13:59 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.572 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Preparing to wait for external event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.573 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.573 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.573 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.574 227364 DEBUG nova.virt.libvirt.vif [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1563638659',display_name='tempest-ListServerFiltersTestJSON-instance-1563638659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1563638659',id=93,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-lyl34nvn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:41Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=6731026b-60ad-4c02-b8a2-807704d1bee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.574 227364 DEBUG nova.network.os_vif_util [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.575 227364 DEBUG nova.network.os_vif_util [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.575 227364 DEBUG os_vif [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.575 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.576 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.576 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.579 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.579 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05ac3c57-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.580 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05ac3c57-c7, col_values=(('external_ids', {'iface-id': '05ac3c57-c712-428d-bb48-65f19e3ab17b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:6d:3e', 'vm-uuid': '6731026b-60ad-4c02-b8a2-807704d1bee2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.581 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 NetworkManager[48922]: <info>  [1764404039.5829] manager: (tap05ac3c57-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.585 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539551 nova_compute[227360]: 2025-11-29 08:13:59.591 227364 INFO os_vif [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7')#033[00m
Nov 29 03:14:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:00.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:00.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.601 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.602 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.602 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No VIF found with MAC fa:16:3e:08:6d:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.602 227364 INFO nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Using config drive#033[00m
Nov 29 03:14:00 np0005539551 nova_compute[227360]: 2025-11-29 08:14:00.638 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.282 227364 INFO nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Creating config drive at /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.289 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr9zcgi0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.427 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwr9zcgi0" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.457 227364 DEBUG nova.storage.rbd_utils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.460 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config 6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.825 227364 DEBUG oslo_concurrency.processutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config 6731026b-60ad-4c02-b8a2-807704d1bee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.826 227364 INFO nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Deleting local config drive /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:01 np0005539551 kernel: tap05ac3c57-c7: entered promiscuous mode
Nov 29 03:14:01 np0005539551 NetworkManager[48922]: <info>  [1764404041.8906] manager: (tap05ac3c57-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 03:14:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:01Z|00374|binding|INFO|Claiming lport 05ac3c57-c712-428d-bb48-65f19e3ab17b for this chassis.
Nov 29 03:14:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:01Z|00375|binding|INFO|05ac3c57-c712-428d-bb48-65f19e3ab17b: Claiming fa:16:3e:08:6d:3e 10.100.0.3
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.916 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:6d:3e 10.100.0.3'], port_security=['fa:16:3e:08:6d:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6731026b-60ad-4c02-b8a2-807704d1bee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=05ac3c57-c712-428d-bb48-65f19e3ab17b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.918 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 05ac3c57-c712-428d-bb48-65f19e3ab17b in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 bound to our chassis#033[00m
Nov 29 03:14:01 np0005539551 systemd-udevd[261625]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.922 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ce08321-9ca9-47d5-b99b-65a439440787#033[00m
Nov 29 03:14:01 np0005539551 systemd-machined[190756]: New machine qemu-41-instance-0000005d.
Nov 29 03:14:01 np0005539551 NetworkManager[48922]: <info>  [1764404041.9369] device (tap05ac3c57-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:01 np0005539551 NetworkManager[48922]: <info>  [1764404041.9384] device (tap05ac3c57-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.941 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7b62f6-5345-491c-bfac-bc1afcdc96c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.943 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ce08321-91 in ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.948 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ce08321-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.948 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1c77b2-b1ed-4c4c-bd9f-e4dfd631661d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539551 systemd[1]: Started Virtual Machine qemu-41-instance-0000005d.
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.950 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[359ed4ed-c9ba-4ff4-9a90-753837456392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.969 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:01Z|00376|binding|INFO|Setting lport 05ac3c57-c712-428d-bb48-65f19e3ab17b ovn-installed in OVS
Nov 29 03:14:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:01Z|00377|binding|INFO|Setting lport 05ac3c57-c712-428d-bb48-65f19e3ab17b up in Southbound
Nov 29 03:14:01 np0005539551 nova_compute[227360]: 2025-11-29 08:14:01.974 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:01.973 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[84aed48c-3106-48f1-a9f1-19e3e08bba8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.004 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb62f96-6221-47c9-8462-4c3c8c047756]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.047 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4142d609-cd80-438e-9624-0e39238b190b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba0a8ab-567e-4268-bf72-1c8c960de114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 systemd-udevd[261629]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:02 np0005539551 NetworkManager[48922]: <info>  [1764404042.0556] manager: (tap5ce08321-90): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.098 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a209ae8b-837d-46f5-887e-6381d2d51acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.102 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0b4f1e-162a-4ca5-a94c-451b43e966fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 NetworkManager[48922]: <info>  [1764404042.1331] device (tap5ce08321-90): carrier: link connected
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.141 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[edba6e9d-54a5-4e5e-ad6b-3e252c1fbc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.176 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[669d5b5f-b3c8-4ac8-ab37-ef479badf3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708901, 'reachable_time': 25249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261659, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.204 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[92f2ce10-d5d3-4fb8-9de8-290dc617510c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:bc0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708901, 'tstamp': 708901}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261660, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.226 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[63574343-b41c-4acd-a1a7-4815528d5022]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708901, 'reachable_time': 25249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261661, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.255 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[27b61955-d878-4ec9-980d-1293d0ddd5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.328 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[98b5b5fe-c0c2-4217-bdfb-b10d53ae0e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.329 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.330 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.330 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce08321-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.332 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539551 NetworkManager[48922]: <info>  [1764404042.3327] manager: (tap5ce08321-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 03:14:02 np0005539551 kernel: tap5ce08321-90: entered promiscuous mode
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.335 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.336 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ce08321-90, col_values=(('external_ids', {'iface-id': 'fb53c57a-d19f-4391-add7-afa34095fb59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:02Z|00378|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.337 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.353 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.354 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.356 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[216201ac-6c30-4d42-af6f-554b789d7c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.357 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:02.358 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'env', 'PROCESS_TAG=haproxy-5ce08321-9ca9-47d5-b99b-65a439440787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ce08321-9ca9-47d5-b99b-65a439440787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:02.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.515 227364 DEBUG nova.compute.manager [req-fbe4e10c-16f5-4f1c-a53b-5334382584c8 req-b4727799-b8fe-4112-a43c-10d5e2ab44cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.515 227364 DEBUG oslo_concurrency.lockutils [req-fbe4e10c-16f5-4f1c-a53b-5334382584c8 req-b4727799-b8fe-4112-a43c-10d5e2ab44cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.516 227364 DEBUG oslo_concurrency.lockutils [req-fbe4e10c-16f5-4f1c-a53b-5334382584c8 req-b4727799-b8fe-4112-a43c-10d5e2ab44cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.516 227364 DEBUG oslo_concurrency.lockutils [req-fbe4e10c-16f5-4f1c-a53b-5334382584c8 req-b4727799-b8fe-4112-a43c-10d5e2ab44cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.516 227364 DEBUG nova.compute.manager [req-fbe4e10c-16f5-4f1c-a53b-5334382584c8 req-b4727799-b8fe-4112-a43c-10d5e2ab44cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Processing event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.530 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.531 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404042.5306878, 6731026b-60ad-4c02-b8a2-807704d1bee2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.531 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.533 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.536 227364 INFO nova.virt.libvirt.driver [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance spawned successfully.#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.536 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.599 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.602 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.668 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.668 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.669 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.669 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.670 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.670 227364 DEBUG nova.virt.libvirt.driver [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.674 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.674 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404042.5331225, 6731026b-60ad-4c02-b8a2-807704d1bee2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:02 np0005539551 nova_compute[227360]: 2025-11-29 08:14:02.674 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:14:02 np0005539551 podman[261736]: 2025-11-29 08:14:02.755763732 +0000 UTC m=+0.046621432 container create ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:14:02 np0005539551 systemd[1]: Started libpod-conmon-ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14.scope.
Nov 29 03:14:02 np0005539551 podman[261736]: 2025-11-29 08:14:02.731315937 +0000 UTC m=+0.022173667 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:02 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:14:02 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068d9718e82075ce3b92b5a6699daa244eaabfbb2216e05c39b2a54f38663a49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:02 np0005539551 podman[261736]: 2025-11-29 08:14:02.853157153 +0000 UTC m=+0.144014873 container init ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:14:02 np0005539551 podman[261736]: 2025-11-29 08:14:02.858861644 +0000 UTC m=+0.149719344 container start ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:14:02 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [NOTICE]   (261755) : New worker (261757) forked
Nov 29 03:14:02 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [NOTICE]   (261755) : Loading success.
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.048 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.052 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404042.533668, 6731026b-60ad-4c02-b8a2-807704d1bee2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.052 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.151 227364 INFO nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Took 22.01 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.151 227364 DEBUG nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.153 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.159 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.188 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.246 227364 INFO nova.compute.manager [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Took 23.48 seconds to build instance.#033[00m
Nov 29 03:14:03 np0005539551 nova_compute[227360]: 2025-11-29 08:14:03.281 227364 DEBUG oslo_concurrency.lockutils [None req-bbf0a717-4fd6-48a9-8e83-55f3a8b15fc8 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:04.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:04.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.582 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.664 227364 DEBUG nova.compute.manager [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.665 227364 DEBUG oslo_concurrency.lockutils [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.665 227364 DEBUG oslo_concurrency.lockutils [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.665 227364 DEBUG oslo_concurrency.lockutils [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.665 227364 DEBUG nova.compute.manager [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] No waiting events found dispatching network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:04 np0005539551 nova_compute[227360]: 2025-11-29 08:14:04.666 227364 WARNING nova.compute.manager [req-426ece51-3a6e-4b9b-8477-074d7f579ddc req-701492b2-2c88-410f-ab5a-09a5ca13d86e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received unexpected event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:05 np0005539551 nova_compute[227360]: 2025-11-29 08:14:05.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:06.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:06.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:08.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:08.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:08.458 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:09 np0005539551 nova_compute[227360]: 2025-11-29 08:14:09.585 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:10.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:10 np0005539551 nova_compute[227360]: 2025-11-29 08:14:10.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:10.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:12.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:12.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:13 np0005539551 podman[261768]: 2025-11-29 08:14:13.607175641 +0000 UTC m=+0.052722963 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:14:13 np0005539551 podman[261767]: 2025-11-29 08:14:13.614072383 +0000 UTC m=+0.060637961 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:14:13 np0005539551 podman[261766]: 2025-11-29 08:14:13.653060212 +0000 UTC m=+0.102319562 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:14:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:14.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:14 np0005539551 nova_compute[227360]: 2025-11-29 08:14:14.587 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:15 np0005539551 nova_compute[227360]: 2025-11-29 08:14:15.394 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:17Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:6d:3e 10.100.0.3
Nov 29 03:14:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:17Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:6d:3e 10.100.0.3
Nov 29 03:14:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:18.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:19 np0005539551 nova_compute[227360]: 2025-11-29 08:14:19.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:19 np0005539551 nova_compute[227360]: 2025-11-29 08:14:19.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:19.864 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:19.865 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:19.866 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:20 np0005539551 nova_compute[227360]: 2025-11-29 08:14:20.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:21Z|00379|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:14:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:21Z|00380|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:21 np0005539551 nova_compute[227360]: 2025-11-29 08:14:21.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 03:14:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:22.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.428 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.428 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:14:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.428 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:14:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:24.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.592 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.949 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.950 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.950 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:14:24 np0005539551 nova_compute[227360]: 2025-11-29 08:14:24.950 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:25 np0005539551 nova_compute[227360]: 2025-11-29 08:14:25.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:26.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.040 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.076 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.077 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.077 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.078 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:27 np0005539551 nova_compute[227360]: 2025-11-29 08:14:27.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:28 np0005539551 nova_compute[227360]: 2025-11-29 08:14:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:28.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.114 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.448 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.470 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.470 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.471 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.471 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.471 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.646 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1931962298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:29 np0005539551 nova_compute[227360]: 2025-11-29 08:14:29.976 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.065 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.066 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.068 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.069 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.203 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.204 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4162MB free_disk=20.715065002441406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.205 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.205 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.317 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.318 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6731026b-60ad-4c02-b8a2-807704d1bee2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.318 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.318 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:30 np0005539551 nova_compute[227360]: 2025-11-29 08:14:30.635 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3272116224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:31 np0005539551 nova_compute[227360]: 2025-11-29 08:14:31.045 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:31 np0005539551 nova_compute[227360]: 2025-11-29 08:14:31.051 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:31 np0005539551 nova_compute[227360]: 2025-11-29 08:14:31.070 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:31 np0005539551 nova_compute[227360]: 2025-11-29 08:14:31.106 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:14:31 np0005539551 nova_compute[227360]: 2025-11-29 08:14:31.106 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 03:14:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:32.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:34.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:34 np0005539551 nova_compute[227360]: 2025-11-29 08:14:34.668 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:35 np0005539551 nova_compute[227360]: 2025-11-29 08:14:35.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:36.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.412 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.447 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:14:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:36.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.955 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.956 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.956 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.956 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.957 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.958 227364 INFO nova.compute.manager [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Terminating instance#033[00m
Nov 29 03:14:36 np0005539551 nova_compute[227360]: 2025-11-29 08:14:36.959 227364 DEBUG nova.compute.manager [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:37 np0005539551 kernel: tap05ac3c57-c7 (unregistering): left promiscuous mode
Nov 29 03:14:37 np0005539551 NetworkManager[48922]: <info>  [1764404077.0199] device (tap05ac3c57-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:37Z|00381|binding|INFO|Releasing lport 05ac3c57-c712-428d-bb48-65f19e3ab17b from this chassis (sb_readonly=0)
Nov 29 03:14:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:37Z|00382|binding|INFO|Setting lport 05ac3c57-c712-428d-bb48-65f19e3ab17b down in Southbound
Nov 29 03:14:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:37Z|00383|binding|INFO|Removing iface tap05ac3c57-c7 ovn-installed in OVS
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.034 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.042 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:6d:3e 10.100.0.3'], port_security=['fa:16:3e:08:6d:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6731026b-60ad-4c02-b8a2-807704d1bee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=05ac3c57-c712-428d-bb48-65f19e3ab17b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.045 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 05ac3c57-c712-428d-bb48-65f19e3ab17b in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 unbound from our chassis#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.048 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ce08321-9ca9-47d5-b99b-65a439440787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.048 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.050 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0dd589-457c-48f7-8e92-2e81f4127ab9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.051 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace which is not needed anymore#033[00m
Nov 29 03:14:37 np0005539551 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 29 03:14:37 np0005539551 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Consumed 14.569s CPU time.
Nov 29 03:14:37 np0005539551 systemd-machined[190756]: Machine qemu-41-instance-0000005d terminated.
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.185 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [NOTICE]   (261755) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:37 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [NOTICE]   (261755) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:37 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [WARNING]  (261755) : Exiting Master process...
Nov 29 03:14:37 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [ALERT]    (261755) : Current worker (261757) exited with code 143 (Terminated)
Nov 29 03:14:37 np0005539551 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[261751]: [WARNING]  (261755) : All workers exited. Exiting... (0)
Nov 29 03:14:37 np0005539551 systemd[1]: libpod-ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14.scope: Deactivated successfully.
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.197 227364 INFO nova.virt.libvirt.driver [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Instance destroyed successfully.#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.198 227364 DEBUG nova.objects.instance [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'resources' on Instance uuid 6731026b-60ad-4c02-b8a2-807704d1bee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:37 np0005539551 podman[261894]: 2025-11-29 08:14:37.201380274 +0000 UTC m=+0.049047825 container died ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:14:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-068d9718e82075ce3b92b5a6699daa244eaabfbb2216e05c39b2a54f38663a49-merged.mount: Deactivated successfully.
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.232 227364 DEBUG nova.virt.libvirt.vif [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1563638659',display_name='tempest-ListServerFiltersTestJSON-instance-1563638659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1563638659',id=93,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-lyl34nvn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:03Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=6731026b-60ad-4c02-b8a2-807704d1bee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.232 227364 DEBUG nova.network.os_vif_util [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "address": "fa:16:3e:08:6d:3e", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05ac3c57-c7", "ovs_interfaceid": "05ac3c57-c712-428d-bb48-65f19e3ab17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.233 227364 DEBUG nova.network.os_vif_util [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.233 227364 DEBUG os_vif [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.235 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.236 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05ac3c57-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:37 np0005539551 podman[261894]: 2025-11-29 08:14:37.241894124 +0000 UTC m=+0.089561665 container cleanup ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.242 227364 INFO os_vif [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:6d:3e,bridge_name='br-int',has_traffic_filtering=True,id=05ac3c57-c712-428d-bb48-65f19e3ab17b,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05ac3c57-c7')#033[00m
Nov 29 03:14:37 np0005539551 systemd[1]: libpod-conmon-ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14.scope: Deactivated successfully.
Nov 29 03:14:37 np0005539551 podman[261936]: 2025-11-29 08:14:37.305871304 +0000 UTC m=+0.043346656 container remove ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.312 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3b5323-4582-4bff-80c9-2eca9ae8ae88]: (4, ('Sat Nov 29 08:14:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14)\nac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14\nSat Nov 29 08:14:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (ac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14)\nac41e102027f11a390def8da28e054fa3187324e108890eee029691d7f5f7d14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.315 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[743e8453-e153-440a-bd3a-97d7ae487712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.316 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.318 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 kernel: tap5ce08321-90: left promiscuous mode
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.334 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.336 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6acbb7-777e-45ac-8952-1d761eb0b80b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.353 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bef393-3c1c-435d-97c4-f87d9a53a250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.354 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f8a5e2-8d93-4703-959e-c660a0543c0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.369 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6eec5511-c455-4a6a-a8aa-db3d94fa2655]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708892, 'reachable_time': 44266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261963, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.374 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:14:37.375 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[afa4c6db-88a3-4c19-b7a5-a4b4dd151833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2d5ce08321\x2d9ca9\x2d47d5\x2db99b\x2d65a439440787.mount: Deactivated successfully.
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.446 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.447 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.447 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.462 227364 DEBUG nova.compute.manager [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-unplugged-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.464 227364 DEBUG oslo_concurrency.lockutils [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.465 227364 DEBUG oslo_concurrency.lockutils [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.465 227364 DEBUG oslo_concurrency.lockutils [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.465 227364 DEBUG nova.compute.manager [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] No waiting events found dispatching network-vif-unplugged-05ac3c57-c712-428d-bb48-65f19e3ab17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:37 np0005539551 nova_compute[227360]: 2025-11-29 08:14:37.466 227364 DEBUG nova.compute.manager [req-884b8dbf-d830-4c8e-ac6b-9fce22300e82 req-54a22476-b813-4a1d-9e72-7e329171983c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-unplugged-05ac3c57-c712-428d-bb48-65f19e3ab17b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:14:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.9 total, 600.0 interval#012Cumulative writes: 31K writes, 119K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 10K syncs, 2.88 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.84 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4438 syncs, 2.49 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.144 227364 INFO nova.virt.libvirt.driver [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Deleting instance files /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2_del#033[00m
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.146 227364 INFO nova.virt.libvirt.driver [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Deletion of /var/lib/nova/instances/6731026b-60ad-4c02-b8a2-807704d1bee2_del complete#033[00m
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.202 227364 INFO nova.compute.manager [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.203 227364 DEBUG oslo.service.loopingcall [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.204 227364 DEBUG nova.compute.manager [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:38 np0005539551 nova_compute[227360]: 2025-11-29 08:14:38.204 227364 DEBUG nova.network.neutron [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:38.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.548 227364 DEBUG nova.compute.manager [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.549 227364 DEBUG oslo_concurrency.lockutils [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.549 227364 DEBUG oslo_concurrency.lockutils [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.549 227364 DEBUG oslo_concurrency.lockutils [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.549 227364 DEBUG nova.compute.manager [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] No waiting events found dispatching network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:39 np0005539551 nova_compute[227360]: 2025-11-29 08:14:39.550 227364 WARNING nova.compute.manager [req-4b2c3f3d-4c00-49f3-9469-109ec9cf3945 req-5edfa8ac-216b-46e6-81a3-59ac27896aa8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received unexpected event network-vif-plugged-05ac3c57-c712-428d-bb48-65f19e3ab17b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.132 227364 DEBUG nova.network.neutron [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.175 227364 INFO nova.compute.manager [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Took 1.97 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:40.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.240 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.240 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.318 227364 DEBUG oslo_concurrency.processutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.448 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.449 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.451 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:40.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1590626171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.729 227364 DEBUG oslo_concurrency.processutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.734 227364 DEBUG nova.compute.provider_tree [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.799 227364 DEBUG nova.scheduler.client.report [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:40 np0005539551 nova_compute[227360]: 2025-11-29 08:14:40.991 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:41 np0005539551 nova_compute[227360]: 2025-11-29 08:14:41.020 227364 INFO nova.scheduler.client.report [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Deleted allocations for instance 6731026b-60ad-4c02-b8a2-807704d1bee2#033[00m
Nov 29 03:14:41 np0005539551 nova_compute[227360]: 2025-11-29 08:14:41.102 227364 DEBUG oslo_concurrency.lockutils [None req-596a53a1-9636-484d-9db5-c5c6c97035b9 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6731026b-60ad-4c02-b8a2-807704d1bee2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:41 np0005539551 nova_compute[227360]: 2025-11-29 08:14:41.703 227364 DEBUG nova.compute.manager [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Received event network-vif-deleted-05ac3c57-c712-428d-bb48-65f19e3ab17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 03:14:42 np0005539551 nova_compute[227360]: 2025-11-29 08:14:42.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:42.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539551 nova_compute[227360]: 2025-11-29 08:14:43.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539551 nova_compute[227360]: 2025-11-29 08:14:43.550 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:44.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:44 np0005539551 podman[261988]: 2025-11-29 08:14:44.602938784 +0000 UTC m=+0.051293314 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:44 np0005539551 podman[261989]: 2025-11-29 08:14:44.604395423 +0000 UTC m=+0.048297775 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 03:14:44 np0005539551 podman[261987]: 2025-11-29 08:14:44.630148143 +0000 UTC m=+0.078501973 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:45 np0005539551 nova_compute[227360]: 2025-11-29 08:14:45.453 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:46.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:47 np0005539551 nova_compute[227360]: 2025-11-29 08:14:47.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539551 nova_compute[227360]: 2025-11-29 08:14:47.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:48.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:50 np0005539551 nova_compute[227360]: 2025-11-29 08:14:50.457 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.197 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404077.1942682, 6731026b-60ad-4c02-b8a2-807704d1bee2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.197 227364 INFO nova.compute.manager [-] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:52 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:52Z|00384|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.252 227364 DEBUG nova.compute.manager [None req-7aa26661-f21e-4168-8cf5-17fd4cb5ec33 - - - - - -] [instance: 6731026b-60ad-4c02-b8a2-807704d1bee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:52 np0005539551 ovn_controller[130266]: 2025-11-29T08:14:52Z|00385|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:14:52 np0005539551 nova_compute[227360]: 2025-11-29 08:14:52.471 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:53 np0005539551 podman[262224]: 2025-11-29 08:14:53.064500798 +0000 UTC m=+0.057642023 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:14:53 np0005539551 podman[262224]: 2025-11-29 08:14:53.169611202 +0000 UTC m=+0.162752417 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:14:53 np0005539551 nova_compute[227360]: 2025-11-29 08:14:53.732 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:54 np0005539551 nova_compute[227360]: 2025-11-29 08:14:54.068 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Triggering sync for uuid 076bf9f6-6607-4b08-b733-864854aad069 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:14:54 np0005539551 nova_compute[227360]: 2025-11-29 08:14:54.069 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:54 np0005539551 nova_compute[227360]: 2025-11-29 08:14:54.069 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "076bf9f6-6607-4b08-b733-864854aad069" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:54 np0005539551 nova_compute[227360]: 2025-11-29 08:14:54.106 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "076bf9f6-6607-4b08-b733-864854aad069" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:55 np0005539551 nova_compute[227360]: 2025-11-29 08:14:55.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:57 np0005539551 nova_compute[227360]: 2025-11-29 08:14:57.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:14:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:58.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:00.360 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:00 np0005539551 nova_compute[227360]: 2025-11-29 08:15:00.360 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:00.361 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:15:00 np0005539551 nova_compute[227360]: 2025-11-29 08:15:00.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:02 np0005539551 nova_compute[227360]: 2025-11-29 08:15:02.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:03.362 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.598621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103598684, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1321, "num_deletes": 252, "total_data_size": 2878307, "memory_usage": 2914608, "flush_reason": "Manual Compaction"}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103618438, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 1857339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41943, "largest_seqno": 43259, "table_properties": {"data_size": 1851536, "index_size": 3070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13174, "raw_average_key_size": 20, "raw_value_size": 1839648, "raw_average_value_size": 2856, "num_data_blocks": 135, "num_entries": 644, "num_filter_entries": 644, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404011, "oldest_key_time": 1764404011, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 19867 microseconds, and 7530 cpu microseconds.
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.618485) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 1857339 bytes OK
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.618510) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619872) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619887) EVENT_LOG_v1 {"time_micros": 1764404103619882, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2871928, prev total WAL file size 2871928, number of live WAL files 2.
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.620810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(1813KB)], [81(10110KB)]
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103620862, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12210025, "oldest_snapshot_seqno": -1}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7416 keys, 10269789 bytes, temperature: kUnknown
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103952210, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 10269789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10221663, "index_size": 28494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 192317, "raw_average_key_size": 25, "raw_value_size": 10090156, "raw_average_value_size": 1360, "num_data_blocks": 1118, "num_entries": 7416, "num_filter_entries": 7416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.952532) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10269789 bytes
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.956117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.8 rd, 31.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.9 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 7939, records dropped: 523 output_compression: NoCompression
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.956134) EVENT_LOG_v1 {"time_micros": 1764404103956126, "job": 50, "event": "compaction_finished", "compaction_time_micros": 331471, "compaction_time_cpu_micros": 25706, "output_level": 6, "num_output_files": 1, "total_output_size": 10269789, "num_input_records": 7939, "num_output_records": 7416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103956510, "job": 50, "event": "table_file_deletion", "file_number": 83}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103958212, "job": 50, "event": "table_file_deletion", "file_number": 81}
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.620738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.958245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.958249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.958251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.958253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:15:03.958254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:04.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:05.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539551 nova_compute[227360]: 2025-11-29 08:15:05.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:05 np0005539551 nova_compute[227360]: 2025-11-29 08:15:05.741 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:05 np0005539551 NetworkManager[48922]: <info>  [1764404105.7426] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 03:15:05 np0005539551 NetworkManager[48922]: <info>  [1764404105.7444] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 03:15:05 np0005539551 nova_compute[227360]: 2025-11-29 08:15:05.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:05Z|00386|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:15:05 np0005539551 nova_compute[227360]: 2025-11-29 08:15:05.868 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:06.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:06 np0005539551 nova_compute[227360]: 2025-11-29 08:15:06.982 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:07.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:07 np0005539551 nova_compute[227360]: 2025-11-29 08:15:07.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:09.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:15:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 8047 writes, 43K keys, 8047 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
Cumulative WAL: 8047 writes, 8047 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1798 writes, 9391 keys, 1798 commit groups, 1.0 writes per commit group, ingest: 17.55 MB, 0.03 MB/s
Interval WAL: 1798 writes, 1798 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     10.6      5.09              0.21        25    0.204       0      0       0.0       0.0
  L6      1/0    9.79 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5     25.7     21.7     11.25              0.87        24    0.469    147K    13K       0.0       0.0
 Sum      1/0    9.79 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5     17.7     18.3     16.35              1.07        49    0.334    147K    13K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     85.6     84.8      1.01              0.28        14    0.072     53K   4112       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     25.7     21.7     11.25              0.87        24    0.469    147K    13K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     10.6      5.09              0.21        24    0.212       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.053, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 16.3 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 29.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000217 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1671,28.59 MB,9.40458%) FilterBlock(49,414.55 KB,0.133168%) IndexBlock(49,736.69 KB,0.236652%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:15:10 np0005539551 nova_compute[227360]: 2025-11-29 08:15:10.466 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.033 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.034 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.056 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.128 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.128 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.135 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.136 227364 INFO nova.compute.claims [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:15:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.303 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/751127984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.863 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.873 227364 DEBUG nova.compute.provider_tree [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.899 227364 DEBUG nova.scheduler.client.report [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.923 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.924 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.974 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.975 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:15:11 np0005539551 nova_compute[227360]: 2025-11-29 08:15:11.998 227364 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.017 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.119 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.120 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.120 227364 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Creating image(s)#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.154 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.188 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.227 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.231 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.259 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.264 227364 DEBUG nova.policy [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0bd9df09f1324e3f9dba099f03ffe1c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '647c3591c2b940409293763c6c83e58c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:15:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.311 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.312 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.313 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.313 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.345 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:12 np0005539551 nova_compute[227360]: 2025-11-29 08:15:12.349 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5e330f04-18c7-454f-a836-2502d7a50da4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.024 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Successfully created port: cef3de2b-0c4e-495b-a804-56753ce894c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.181 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5e330f04-18c7-454f-a836-2502d7a50da4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.832s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.279 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] resizing rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.548 227364 DEBUG nova.objects.instance [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'migration_context' on Instance uuid 5e330f04-18c7-454f-a836-2502d7a50da4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.563 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.564 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Ensure instance console log exists: /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.564 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.565 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:13 np0005539551 nova_compute[227360]: 2025-11-29 08:15:13.565 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.141 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Successfully updated port: cef3de2b-0c4e-495b-a804-56753ce894c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.179 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.179 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquired lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.179 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.269 227364 DEBUG nova.compute.manager [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-changed-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.270 227364 DEBUG nova.compute.manager [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Refreshing instance network info cache due to event network-changed-cef3de2b-0c4e-495b-a804-56753ce894c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:14 np0005539551 nova_compute[227360]: 2025-11-29 08:15:14.270 227364 DEBUG oslo_concurrency.lockutils [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539551 nova_compute[227360]: 2025-11-29 08:15:15.051 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:15:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:15 np0005539551 nova_compute[227360]: 2025-11-29 08:15:15.516 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:15 np0005539551 podman[262721]: 2025-11-29 08:15:15.612576372 +0000 UTC m=+0.064266747 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 03:15:15 np0005539551 podman[262722]: 2025-11-29 08:15:15.624930688 +0000 UTC m=+0.066443124 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 03:15:15 np0005539551 podman[262720]: 2025-11-29 08:15:15.63520912 +0000 UTC m=+0.088488218 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:15:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.413 227364 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Updating instance_info_cache with network_info: [{"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.440 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Releasing lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.441 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Instance network_info: |[{"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.441 227364 DEBUG oslo_concurrency.lockutils [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.441 227364 DEBUG nova.network.neutron [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Refreshing network info cache for port cef3de2b-0c4e-495b-a804-56753ce894c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.444 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Start _get_guest_xml network_info=[{"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.449 227364 WARNING nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.454 227364 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.454 227364 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.461 227364 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.461 227364 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.462 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.462 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.463 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.463 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.463 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.463 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.464 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.464 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.464 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.464 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.464 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.465 227364 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.468 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3384434761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.877 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.919 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:17 np0005539551 nova_compute[227360]: 2025-11-29 08:15:17.923 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1678718068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.394 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.398 227364 DEBUG nova.virt.libvirt.vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-2',id=98,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:12Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=5e330f04-18c7-454f-a836-2502d7a50da4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.398 227364 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.399 227364 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.401 227364 DEBUG nova.objects.instance [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e330f04-18c7-454f-a836-2502d7a50da4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.422 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <uuid>5e330f04-18c7-454f-a836-2502d7a50da4</uuid>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <name>instance-00000062</name>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:name>tempest-tempest.common.compute-instance-1895741234-2</nova:name>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:15:17</nova:creationTime>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:user uuid="0bd9df09f1324e3f9dba099f03ffe1c6">tempest-MultipleCreateTestJSON-2058984420-project-member</nova:user>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:project uuid="647c3591c2b940409293763c6c83e58c">tempest-MultipleCreateTestJSON-2058984420</nova:project>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <nova:port uuid="cef3de2b-0c4e-495b-a804-56753ce894c1">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="serial">5e330f04-18c7-454f-a836-2502d7a50da4</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="uuid">5e330f04-18c7-454f-a836-2502d7a50da4</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/5e330f04-18c7-454f-a836-2502d7a50da4_disk">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/5e330f04-18c7-454f-a836-2502d7a50da4_disk.config">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:33:f5:a8"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <target dev="tapcef3de2b-0c"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/console.log" append="off"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:15:18 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:15:18 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:15:18 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:15:18 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.423 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Preparing to wait for external event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.423 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.423 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.424 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.424 227364 DEBUG nova.virt.libvirt.vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-2',id=98,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:12Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=5e330f04-18c7-454f-a836-2502d7a50da4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.425 227364 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.425 227364 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.426 227364 DEBUG os_vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.427 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.427 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.432 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcef3de2b-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.432 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcef3de2b-0c, col_values=(('external_ids', {'iface-id': 'cef3de2b-0c4e-495b-a804-56753ce894c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:f5:a8', 'vm-uuid': '5e330f04-18c7-454f-a836-2502d7a50da4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.434 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539551 NetworkManager[48922]: <info>  [1764404118.4348] manager: (tapcef3de2b-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539551 nova_compute[227360]: 2025-11-29 08:15:18.443 227364 INFO os_vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c')#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.053 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.054 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.054 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No VIF found with MAC fa:16:3e:33:f5:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.055 227364 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Using config drive#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.099 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:19.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.714 227364 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Creating config drive at /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.724 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr92y297y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:19.865 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:19.865 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:19.866 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.880 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr92y297y" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.912 227364 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 5e330f04-18c7-454f-a836-2502d7a50da4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539551 nova_compute[227360]: 2025-11-29 08:15:19.916 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config 5e330f04-18c7-454f-a836-2502d7a50da4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.069 227364 DEBUG nova.network.neutron [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Updated VIF entry in instance network info cache for port cef3de2b-0c4e-495b-a804-56753ce894c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.070 227364 DEBUG nova.network.neutron [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Updating instance_info_cache with network_info: [{"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.086 227364 DEBUG oslo_concurrency.lockutils [req-d3644411-70dd-4d4e-8c8f-79101da19c36 req-e5fc6df8-ecef-47c8-963d-419f2429b1d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5e330f04-18c7-454f-a836-2502d7a50da4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.222 227364 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config 5e330f04-18c7-454f-a836-2502d7a50da4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.223 227364 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Deleting local config drive /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:20 np0005539551 kernel: tapcef3de2b-0c: entered promiscuous mode
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.2780] manager: (tapcef3de2b-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 03:15:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:20Z|00387|binding|INFO|Claiming lport cef3de2b-0c4e-495b-a804-56753ce894c1 for this chassis.
Nov 29 03:15:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:20Z|00388|binding|INFO|cef3de2b-0c4e-495b-a804-56753ce894c1: Claiming fa:16:3e:33:f5:a8 10.100.0.6
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.277 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.285 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:f5:a8 10.100.0.6'], port_security=['fa:16:3e:33:f5:a8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5e330f04-18c7-454f-a836-2502d7a50da4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cef3de2b-0c4e-495b-a804-56753ce894c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.286 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cef3de2b-0c4e-495b-a804-56753ce894c1 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 bound to our chassis#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.287 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab70b036-b5ab-4377-b081-f4b82fdb05c5#033[00m
Nov 29 03:15:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.300 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23a799d6-d72b-4311-9b97-c675a587ebe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.301 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab70b036-b1 in ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:20.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.303 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab70b036-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.303 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9cbaf4-6ff6-4ecb-a2d6-a90910293edf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.304 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5b983978-dea9-4aaa-b21d-f7c1eaa15693]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:20Z|00389|binding|INFO|Setting lport cef3de2b-0c4e-495b-a804-56753ce894c1 ovn-installed in OVS
Nov 29 03:15:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:20Z|00390|binding|INFO|Setting lport cef3de2b-0c4e-495b-a804-56753ce894c1 up in Southbound
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.306 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.311 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 systemd-machined[190756]: New machine qemu-42-instance-00000062.
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.317 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc316b4-2fac-4c03-a756-b91d1169ae22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 systemd[1]: Started Virtual Machine qemu-42-instance-00000062.
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.345 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd90ce1-35d8-400b-9377-cd9e3c346e4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 systemd-udevd[262921]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.3648] device (tapcef3de2b-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.3655] device (tapcef3de2b-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.380 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3fce8d3f-1186-44f1-9e82-ece64d49cd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.3872] manager: (tapab70b036-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.386 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3d451f53-3fcb-4f19-874c-e7f8ba618713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.419 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[86f8cb04-0574-443c-b2e3-6a62548ed9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.422 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8e87484e-b44a-4c9f-b231-f4be5b8708a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.4455] device (tapab70b036-b0): carrier: link connected
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.452 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d4296f-9760-4608-b69d-3793a2f69bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.469 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9717c8-af50-4c0c-ad4f-f81830d38aa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716732, 'reachable_time': 17478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262951, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.486 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7cbbd7-111b-4093-8096-7933c3c2e1a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:436f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716732, 'tstamp': 716732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262952, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.504 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[40d17399-a008-4eb7-a102-86bff601b119]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716732, 'reachable_time': 17478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262953, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.517 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.536 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbc6463-737c-4efc-921c-818862975bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.605 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[41b2dc15-a069-4e42-8fc2-51f91d32c685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.606 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.606 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.606 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab70b036-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539551 NetworkManager[48922]: <info>  [1764404120.6094] manager: (tapab70b036-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 03:15:20 np0005539551 kernel: tapab70b036-b0: entered promiscuous mode
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.608 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.619 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab70b036-b0, col_values=(('external_ids', {'iface-id': 'b68836c1-a6f1-4b18-aa8f-0c55204e98dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:20Z|00391|binding|INFO|Releasing lport b68836c1-a6f1-4b18-aa8f-0c55204e98dc from this chassis (sb_readonly=0)
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.620 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.635 227364 DEBUG nova.compute.manager [req-ca3243db-d368-4b0f-8862-5c1bb134b89b req-5ca75bda-8182-437c-84d8-170a4ab2f0d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.635 227364 DEBUG oslo_concurrency.lockutils [req-ca3243db-d368-4b0f-8862-5c1bb134b89b req-5ca75bda-8182-437c-84d8-170a4ab2f0d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.635 227364 DEBUG oslo_concurrency.lockutils [req-ca3243db-d368-4b0f-8862-5c1bb134b89b req-5ca75bda-8182-437c-84d8-170a4ab2f0d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.635 227364 DEBUG oslo_concurrency.lockutils [req-ca3243db-d368-4b0f-8862-5c1bb134b89b req-5ca75bda-8182-437c-84d8-170a4ab2f0d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.635 227364 DEBUG nova.compute.manager [req-ca3243db-d368-4b0f-8862-5c1bb134b89b req-5ca75bda-8182-437c-84d8-170a4ab2f0d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Processing event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.636 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.638 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.639 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d5c585-770c-444a-b51a-b9de26737242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.640 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:20.640 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'env', 'PROCESS_TAG=haproxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab70b036-b5ab-4377-b081-f4b82fdb05c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.955 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404120.954616, 5e330f04-18c7-454f-a836-2502d7a50da4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.956 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.960 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.965 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.969 227364 INFO nova.virt.libvirt.driver [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Instance spawned successfully.#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.970 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.979 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.982 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.994 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.995 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.995 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.996 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.997 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:20 np0005539551 nova_compute[227360]: 2025-11-29 08:15:20.998 227364 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.005 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.005 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404120.9559834, 5e330f04-18c7-454f-a836-2502d7a50da4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.006 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.043 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.048 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404120.9646394, 5e330f04-18c7-454f-a836-2502d7a50da4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.048 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.068 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.072 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.077 227364 INFO nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.079 227364 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.088 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:21 np0005539551 podman[263025]: 2025-11-29 08:15:21.019038468 +0000 UTC m=+0.033163767 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.156 227364 INFO nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Took 10.06 seconds to build instance.#033[00m
Nov 29 03:15:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:21.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:21 np0005539551 nova_compute[227360]: 2025-11-29 08:15:21.197 227364 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:21 np0005539551 podman[263025]: 2025-11-29 08:15:21.625506089 +0000 UTC m=+0.639631408 container create f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:21 np0005539551 systemd[1]: Started libpod-conmon-f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70.scope.
Nov 29 03:15:21 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:15:21 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295b756d66c4573a49e10075e0fabc3b27df86a31e0ec5dc7167ee75b2c3234/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:22 np0005539551 podman[263025]: 2025-11-29 08:15:22.051405653 +0000 UTC m=+1.065531032 container init f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:15:22 np0005539551 podman[263025]: 2025-11-29 08:15:22.061267514 +0000 UTC m=+1.075392813 container start f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:15:22 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [NOTICE]   (263044) : New worker (263046) forked
Nov 29 03:15:22 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [NOTICE]   (263044) : Loading success.
Nov 29 03:15:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.849 227364 DEBUG nova.compute.manager [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.849 227364 DEBUG oslo_concurrency.lockutils [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.850 227364 DEBUG oslo_concurrency.lockutils [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.850 227364 DEBUG oslo_concurrency.lockutils [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.850 227364 DEBUG nova.compute.manager [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] No waiting events found dispatching network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:22 np0005539551 nova_compute[227360]: 2025-11-29 08:15:22.850 227364 WARNING nova.compute.manager [req-5e3cbd41-4b43-40d2-947b-7ecc27193580 req-e6c61a10-96a7-43c0-a27e-b00621016c88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received unexpected event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:15:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539551 nova_compute[227360]: 2025-11-29 08:15:23.435 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.160 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.745 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.746 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.832 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.835 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.835 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.836 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.836 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.836 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.837 227364 INFO nova.compute.manager [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Terminating instance#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.838 227364 DEBUG nova.compute.manager [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:24 np0005539551 kernel: tapcef3de2b-0c (unregistering): left promiscuous mode
Nov 29 03:15:24 np0005539551 NetworkManager[48922]: <info>  [1764404124.8964] device (tapcef3de2b-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.906 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:24Z|00392|binding|INFO|Releasing lport cef3de2b-0c4e-495b-a804-56753ce894c1 from this chassis (sb_readonly=0)
Nov 29 03:15:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:24Z|00393|binding|INFO|Setting lport cef3de2b-0c4e-495b-a804-56753ce894c1 down in Southbound
Nov 29 03:15:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:24Z|00394|binding|INFO|Removing iface tapcef3de2b-0c ovn-installed in OVS
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.908 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:24.919 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:f5:a8 10.100.0.6'], port_security=['fa:16:3e:33:f5:a8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5e330f04-18c7-454f-a836-2502d7a50da4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cef3de2b-0c4e-495b-a804-56753ce894c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:24.920 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cef3de2b-0c4e-495b-a804-56753ce894c1 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 unbound from our chassis#033[00m
Nov 29 03:15:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:24.922 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab70b036-b5ab-4377-b081-f4b82fdb05c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:24.923 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa14de5-7cc0-4c5d-8bfa-754bcffc8101]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:24.924 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace which is not needed anymore#033[00m
Nov 29 03:15:24 np0005539551 nova_compute[227360]: 2025-11-29 08:15:24.930 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539551 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 29 03:15:24 np0005539551 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000062.scope: Consumed 4.586s CPU time.
Nov 29 03:15:24 np0005539551 systemd-machined[190756]: Machine qemu-42-instance-00000062 terminated.
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.074 227364 INFO nova.virt.libvirt.driver [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Instance destroyed successfully.#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.075 227364 DEBUG nova.objects.instance [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'resources' on Instance uuid 5e330f04-18c7-454f-a836-2502d7a50da4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.090 227364 DEBUG nova.virt.libvirt.vif [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-2',id=98,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:15:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:21Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=5e330f04-18c7-454f-a836-2502d7a50da4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.091 227364 DEBUG nova.network.os_vif_util [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "cef3de2b-0c4e-495b-a804-56753ce894c1", "address": "fa:16:3e:33:f5:a8", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef3de2b-0c", "ovs_interfaceid": "cef3de2b-0c4e-495b-a804-56753ce894c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.091 227364 DEBUG nova.network.os_vif_util [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.091 227364 DEBUG os_vif [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.093 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.093 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcef3de2b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.095 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.099 227364 INFO os_vif [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f5:a8,bridge_name='br-int',has_traffic_filtering=True,id=cef3de2b-0c4e-495b-a804-56753ce894c1,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef3de2b-0c')#033[00m
Nov 29 03:15:25 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [NOTICE]   (263044) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:25 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [NOTICE]   (263044) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:25 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [ALERT]    (263044) : Current worker (263046) exited with code 143 (Terminated)
Nov 29 03:15:25 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263040]: [WARNING]  (263044) : All workers exited. Exiting... (0)
Nov 29 03:15:25 np0005539551 systemd[1]: libpod-f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70.scope: Deactivated successfully.
Nov 29 03:15:25 np0005539551 podman[263078]: 2025-11-29 08:15:25.130933747 +0000 UTC m=+0.104180553 container died f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:15:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.250 227364 DEBUG nova.compute.manager [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-unplugged-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.251 227364 DEBUG oslo_concurrency.lockutils [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.251 227364 DEBUG oslo_concurrency.lockutils [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.251 227364 DEBUG oslo_concurrency.lockutils [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.251 227364 DEBUG nova.compute.manager [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] No waiting events found dispatching network-vif-unplugged-cef3de2b-0c4e-495b-a804-56753ce894c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.252 227364 DEBUG nova.compute.manager [req-10ab1f89-85cb-430e-9249-140267b15206 req-e6dfc453-9d40-417a-937d-f8826b89a45a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-unplugged-cef3de2b-0c4e-495b-a804-56753ce894c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:15:25 np0005539551 systemd[1]: var-lib-containers-storage-overlay-5295b756d66c4573a49e10075e0fabc3b27df86a31e0ec5dc7167ee75b2c3234-merged.mount: Deactivated successfully.
Nov 29 03:15:25 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:25 np0005539551 nova_compute[227360]: 2025-11-29 08:15:25.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:25 np0005539551 podman[263078]: 2025-11-29 08:15:25.882192651 +0000 UTC m=+0.855439407 container cleanup f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:15:25 np0005539551 systemd[1]: libpod-conmon-f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70.scope: Deactivated successfully.
Nov 29 03:15:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:26 np0005539551 nova_compute[227360]: 2025-11-29 08:15:26.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:26 np0005539551 nova_compute[227360]: 2025-11-29 08:15:26.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:26 np0005539551 podman[263138]: 2025-11-29 08:15:26.697431764 +0000 UTC m=+0.785710865 container remove f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.704 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7760a2c0-b3a5-4454-9b87-7ac7493fef1a]: (4, ('Sat Nov 29 08:15:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70)\nf5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70\nSat Nov 29 08:15:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (f5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70)\nf5a5af239ff2351142e30e03ddd1bdfdf3b735687fff38a10cf21fddea830b70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.707 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[65462c94-7737-4410-85ba-b72831142df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.708 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:26 np0005539551 nova_compute[227360]: 2025-11-29 08:15:26.711 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539551 kernel: tapab70b036-b0: left promiscuous mode
Nov 29 03:15:26 np0005539551 nova_compute[227360]: 2025-11-29 08:15:26.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.730 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[becbe79c-7e37-4991-b96e-d7174b53524d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.746 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d4ed57-fcad-4e4e-8d1d-393068584991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.748 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a24d8e7-f12d-4c75-a04d-f3a1cdcc31b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.775 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[206d70d6-dae3-4eb6-be82-421a51942cdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716725, 'reachable_time': 31699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263153, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:26 np0005539551 systemd[1]: run-netns-ovnmeta\x2dab70b036\x2db5ab\x2d4377\x2db081\x2df4b82fdb05c5.mount: Deactivated successfully.
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.778 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:26.779 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1a450c-4d0e-4430-961f-00a5971567ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.500 227364 DEBUG nova.compute.manager [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.501 227364 DEBUG oslo_concurrency.lockutils [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.501 227364 DEBUG oslo_concurrency.lockutils [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.501 227364 DEBUG oslo_concurrency.lockutils [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.501 227364 DEBUG nova.compute.manager [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] No waiting events found dispatching network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.501 227364 WARNING nova.compute.manager [req-478704b4-1779-458b-b978-9b2f7ad712a8 req-8efddf4d-72f4-4c5e-a708-2e2271e4ae32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received unexpected event network-vif-plugged-cef3de2b-0c4e-495b-a804-56753ce894c1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.880 227364 INFO nova.virt.libvirt.driver [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Deleting instance files /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4_del#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.881 227364 INFO nova.virt.libvirt.driver [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Deletion of /var/lib/nova/instances/5e330f04-18c7-454f-a836-2502d7a50da4_del complete#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.969 227364 INFO nova.compute.manager [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Took 3.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.970 227364 DEBUG oslo.service.loopingcall [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.970 227364 DEBUG nova.compute.manager [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:15:27 np0005539551 nova_compute[227360]: 2025-11-29 08:15:27.971 227364 DEBUG nova.network.neutron [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:15:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.562 227364 DEBUG nova.network.neutron [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.606 227364 INFO nova.compute.manager [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Took 1.64 seconds to deallocate network for instance.#033[00m
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.682 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.683 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.773 227364 DEBUG nova.compute.manager [req-596938e0-84d7-4c92-b150-d263e267576a req-4b694d4e-5da9-4c9f-89fd-e1e5607035ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Received event network-vif-deleted-cef3de2b-0c4e-495b-a804-56753ce894c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:29 np0005539551 nova_compute[227360]: 2025-11-29 08:15:29.865 227364 DEBUG oslo_concurrency.processutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.096 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2848523922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.302 227364 DEBUG oslo_concurrency.processutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.309 227364 DEBUG nova.compute.provider_tree [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.327 227364 DEBUG nova.scheduler.client.report [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.365 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.409 227364 INFO nova.scheduler.client.report [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Deleted allocations for instance 5e330f04-18c7-454f-a836-2502d7a50da4#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.450 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.450 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.451 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.451 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.451 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.523 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.539 227364 DEBUG oslo_concurrency.lockutils [None req-fafe8d82-2ba9-4fbe-9e24-f9114bdea795 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "5e330f04-18c7-454f-a836-2502d7a50da4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3300042229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:30 np0005539551 nova_compute[227360]: 2025-11-29 08:15:30.915 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.108 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.109 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:31.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.317 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.318 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4360MB free_disk=20.817615509033203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.318 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.318 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.534 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.534 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.535 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.605 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:31 np0005539551 nova_compute[227360]: 2025-11-29 08:15:31.666 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2962958715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:32 np0005539551 nova_compute[227360]: 2025-11-29 08:15:32.064 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:32 np0005539551 nova_compute[227360]: 2025-11-29 08:15:32.070 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:32 np0005539551 nova_compute[227360]: 2025-11-29 08:15:32.391 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:32 np0005539551 nova_compute[227360]: 2025-11-29 08:15:32.422 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:15:32 np0005539551 nova_compute[227360]: 2025-11-29 08:15:32.422 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:33.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:35 np0005539551 nova_compute[227360]: 2025-11-29 08:15:35.099 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:35.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:35 np0005539551 nova_compute[227360]: 2025-11-29 08:15:35.525 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.123 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.123 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.194 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.388 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.389 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.398 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.399 227364 INFO nova.compute.claims [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.422 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.422 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.423 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.774 227364 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.857 227364 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.858 227364 DEBUG nova.compute.provider_tree [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.875 227364 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.934 227364 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:15:39 np0005539551 nova_compute[227360]: 2025-11-29 08:15:39.995 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.073 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404125.07243, 5e330f04-18c7-454f-a836-2502d7a50da4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.074 227364 INFO nova.compute.manager [-] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.104 227364 DEBUG nova.compute.manager [None req-529a2fcf-9e7a-41cd-b47d-c23988591cb7 - - - - - -] [instance: 5e330f04-18c7-454f-a836-2502d7a50da4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.152 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3287240449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.485 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.491 227364 DEBUG nova.compute.provider_tree [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.527 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.567 227364 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.617 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.618 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:15:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.724 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.724 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.755 227364 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:15:40 np0005539551 nova_compute[227360]: 2025-11-29 08:15:40.812 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.062 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.064 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.065 227364 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Creating image(s)#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.118 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.154 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.184 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.187 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:41.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.270 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.271 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.272 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.272 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.298 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.302 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.518 227364 DEBUG nova.policy [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0bd9df09f1324e3f9dba099f03ffe1c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '647c3591c2b940409293763c6c83e58c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.673 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.753 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] resizing rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.872 227364 DEBUG nova.objects.instance [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'migration_context' on Instance uuid 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.918 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.919 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Ensure instance console log exists: /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.919 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.920 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:41 np0005539551 nova_compute[227360]: 2025-11-29 08:15:41.920 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:42 np0005539551 nova_compute[227360]: 2025-11-29 08:15:42.721 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:43 np0005539551 nova_compute[227360]: 2025-11-29 08:15:43.075 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Successfully created port: 821bcebf-20b5-454c-b867-9e53797f68b6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:15:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.392 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Successfully updated port: 821bcebf-20b5-454c-b867-9e53797f68b6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.414 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.415 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquired lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.415 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.567 227364 DEBUG nova.compute.manager [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-changed-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.567 227364 DEBUG nova.compute.manager [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Refreshing instance network info cache due to event network-changed-821bcebf-20b5-454c-b867-9e53797f68b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.567 227364 DEBUG oslo_concurrency.lockutils [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:44 np0005539551 nova_compute[227360]: 2025-11-29 08:15:44.694 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:15:45 np0005539551 nova_compute[227360]: 2025-11-29 08:15:45.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:45.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:45 np0005539551 nova_compute[227360]: 2025-11-29 08:15:45.528 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:46 np0005539551 podman[263412]: 2025-11-29 08:15:46.594548609 +0000 UTC m=+0.047027622 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.605 227364 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Updating instance_info_cache with network_info: [{"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:46 np0005539551 podman[263411]: 2025-11-29 08:15:46.620593647 +0000 UTC m=+0.075470893 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:46 np0005539551 podman[263410]: 2025-11-29 08:15:46.621739317 +0000 UTC m=+0.076208823 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.648 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Releasing lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.648 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Instance network_info: |[{"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.648 227364 DEBUG oslo_concurrency.lockutils [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.648 227364 DEBUG nova.network.neutron [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Refreshing network info cache for port 821bcebf-20b5-454c-b867-9e53797f68b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.650 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Start _get_guest_xml network_info=[{"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.654 227364 WARNING nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.661 227364 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.662 227364 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.666 227364 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.666 227364 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.667 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.667 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.668 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.668 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.668 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.668 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.668 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.669 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.669 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.669 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.669 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.669 227364 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:46 np0005539551 nova_compute[227360]: 2025-11-29 08:15:46.672 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1894630715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.095 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.123 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.128 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:47.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/727597712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.562 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.564 227364 DEBUG nova.virt.libvirt.vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-1',id=100,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:40Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.564 227364 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.566 227364 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.567 227364 DEBUG nova.objects.instance [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.593 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <uuid>9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c</uuid>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <name>instance-00000064</name>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:name>tempest-MultipleCreateTestJSON-server-700323681-1</nova:name>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:15:46</nova:creationTime>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:user uuid="0bd9df09f1324e3f9dba099f03ffe1c6">tempest-MultipleCreateTestJSON-2058984420-project-member</nova:user>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:project uuid="647c3591c2b940409293763c6c83e58c">tempest-MultipleCreateTestJSON-2058984420</nova:project>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <nova:port uuid="821bcebf-20b5-454c-b867-9e53797f68b6">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="serial">9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="uuid">9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:f1:59:20"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <target dev="tap821bcebf-20"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/console.log" append="off"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:15:47 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:15:47 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:15:47 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:15:47 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.595 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Preparing to wait for external event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.596 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.596 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.596 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.597 227364 DEBUG nova.virt.libvirt.vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-1',id=100,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:40Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.597 227364 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.598 227364 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.598 227364 DEBUG os_vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.599 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.600 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.603 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap821bcebf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.604 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap821bcebf-20, col_values=(('external_ids', {'iface-id': '821bcebf-20b5-454c-b867-9e53797f68b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f1:59:20', 'vm-uuid': '9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:47 np0005539551 NetworkManager[48922]: <info>  [1764404147.6064] manager: (tap821bcebf-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.609 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.611 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.612 227364 INFO os_vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20')#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.729 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.730 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.730 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No VIF found with MAC fa:16:3e:f1:59:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.731 227364 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Using config drive#033[00m
Nov 29 03:15:47 np0005539551 nova_compute[227360]: 2025-11-29 08:15:47.755 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:49.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539551 nova_compute[227360]: 2025-11-29 08:15:49.876 227364 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Creating config drive at /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config#033[00m
Nov 29 03:15:49 np0005539551 nova_compute[227360]: 2025-11-29 08:15:49.885 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6w41jg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.026 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6w41jg6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.108 227364 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.111 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.324 227364 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.326 227364 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Deleting local config drive /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:50 np0005539551 kernel: tap821bcebf-20: entered promiscuous mode
Nov 29 03:15:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:50Z|00395|binding|INFO|Claiming lport 821bcebf-20b5-454c-b867-9e53797f68b6 for this chassis.
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.3731] manager: (tap821bcebf-20): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 03:15:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:50Z|00396|binding|INFO|821bcebf-20b5-454c-b867-9e53797f68b6: Claiming fa:16:3e:f1:59:20 10.100.0.8
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.371 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.378 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:59:20 10.100.0.8'], port_security=['fa:16:3e:f1:59:20 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=821bcebf-20b5-454c-b867-9e53797f68b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.378 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 821bcebf-20b5-454c-b867-9e53797f68b6 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 bound to our chassis#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.380 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab70b036-b5ab-4377-b081-f4b82fdb05c5#033[00m
Nov 29 03:15:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:50Z|00397|binding|INFO|Setting lport 821bcebf-20b5-454c-b867-9e53797f68b6 ovn-installed in OVS
Nov 29 03:15:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:50Z|00398|binding|INFO|Setting lport 821bcebf-20b5-454c-b867-9e53797f68b6 up in Southbound
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.388 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.390 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.393 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6d352026-b25e-4a15-8266-80e630e96aed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.394 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab70b036-b1 in ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.396 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab70b036-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.397 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[17efeb7c-58b2-4ca2-87b4-f84e6c6e2220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.397 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3f573fac-dac9-4ae2-b2ab-0ff77950f5c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 systemd-machined[190756]: New machine qemu-43-instance-00000064.
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.408 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8b7e69-4b16-4fce-a77e-38c7344cbc3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 systemd[1]: Started Virtual Machine qemu-43-instance-00000064.
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.432 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fcce53de-4b7f-4ee7-a713-b0411ee452cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 systemd-udevd[263609]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.4463] device (tap821bcebf-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.4485] device (tap821bcebf-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.467 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[510bec32-91af-4c8c-8f66-afa81046823c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.4739] manager: (tapab70b036-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.472 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[034e4c58-884f-41c0-bb32-6a62a00968b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.518 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5428213c-8f8b-4548-95f5-8aa4c5192fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.521 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f95ae7ea-3198-42fc-b0ac-b159252c351e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.529 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.5492] device (tapab70b036-b0): carrier: link connected
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.558 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[04a89e3e-4d72-41a9-9717-0d11910ce57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.579 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2e97fabb-f4e9-40eb-a991-295c86bf708f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719742, 'reachable_time': 24362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263639, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.594 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[032b4b85-0ff1-4e47-b32b-2465acf95c6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:436f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719742, 'tstamp': 719742}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263640, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.615 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[afd8e78f-b274-4b8b-b514-9f5512e87cf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719742, 'reachable_time': 24362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263641, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.646 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75fbe916-38d3-41ef-962d-aa3171cf6926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.718 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[05cc917c-f435-4ab8-8f6b-d6f0bfc1fdb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.720 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.720 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.721 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab70b036-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:50 np0005539551 NetworkManager[48922]: <info>  [1764404150.7236] manager: (tapab70b036-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 03:15:50 np0005539551 kernel: tapab70b036-b0: entered promiscuous mode
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.725 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.726 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab70b036-b0, col_values=(('external_ids', {'iface-id': 'b68836c1-a6f1-4b18-aa8f-0c55204e98dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.727 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:50Z|00399|binding|INFO|Releasing lport b68836c1-a6f1-4b18-aa8f-0c55204e98dc from this chassis (sb_readonly=0)
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.729 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.731 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3be1db-df92-4681-a03c-b20ad6348154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.731 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:50.732 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'env', 'PROCESS_TAG=haproxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab70b036-b5ab-4377-b081-f4b82fdb05c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.754 227364 DEBUG nova.network.neutron [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Updated VIF entry in instance network info cache for port 821bcebf-20b5-454c-b867-9e53797f68b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.755 227364 DEBUG nova.network.neutron [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Updating instance_info_cache with network_info: [{"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.805 227364 DEBUG oslo_concurrency.lockutils [req-49c7b5c2-d2be-41f6-bf02-cd11ff06204d req-23699237-1857-4b07-bc9f-2cb10fee89f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.963 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404150.9627635, 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.963 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.987 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.992 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404150.963018, 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:50 np0005539551 nova_compute[227360]: 2025-11-29 08:15:50.992 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.012 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.015 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.074 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:51 np0005539551 podman[263715]: 2025-11-29 08:15:51.112894067 +0000 UTC m=+0.049164889 container create f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:15:51 np0005539551 systemd[1]: Started libpod-conmon-f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d.scope.
Nov 29 03:15:51 np0005539551 podman[263715]: 2025-11-29 08:15:51.087452885 +0000 UTC m=+0.023723747 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:51 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:15:51 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9340dfc939c5fdf921eda4687be06f97a2a2c1727b7b19bacc3f6da2d89afca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:51 np0005539551 podman[263715]: 2025-11-29 08:15:51.205176653 +0000 UTC m=+0.141447475 container init f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:15:51 np0005539551 podman[263715]: 2025-11-29 08:15:51.210399781 +0000 UTC m=+0.146670603 container start f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:51 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [NOTICE]   (263735) : New worker (263737) forked
Nov 29 03:15:51 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [NOTICE]   (263735) : Loading success.
Nov 29 03:15:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:51.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.936 227364 DEBUG nova.compute.manager [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.937 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.937 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.938 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.938 227364 DEBUG nova.compute.manager [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Processing event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.938 227364 DEBUG nova.compute.manager [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.938 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.939 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.939 227364 DEBUG oslo_concurrency.lockutils [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.939 227364 DEBUG nova.compute.manager [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] No waiting events found dispatching network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.940 227364 WARNING nova.compute.manager [req-92bcfa60-d5aa-4c3a-b620-a50803ac64cc req-30736157-6f88-434a-b11d-d40869579f55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received unexpected event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.940 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.944 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404151.9444308, 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.944 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.946 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.949 227364 INFO nova.virt.libvirt.driver [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Instance spawned successfully.#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.950 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.976 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.981 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.984 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.985 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.985 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.985 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.986 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:51 np0005539551 nova_compute[227360]: 2025-11-29 08:15:51.986 227364 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.042 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.094 227364 INFO nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Took 11.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.094 227364 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.205 227364 INFO nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Took 12.88 seconds to build instance.#033[00m
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.258 227364 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:52 np0005539551 nova_compute[227360]: 2025-11-29 08:15:52.607 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:55 np0005539551 nova_compute[227360]: 2025-11-29 08:15:55.533 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:56.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:57 np0005539551 nova_compute[227360]: 2025-11-29 08:15:57.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.485 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.486 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.486 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.486 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.487 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.488 227364 INFO nova.compute.manager [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Terminating instance#033[00m
Nov 29 03:15:58 np0005539551 nova_compute[227360]: 2025-11-29 08:15:58.489 227364 DEBUG nova.compute.manager [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:59 np0005539551 kernel: tap821bcebf-20 (unregistering): left promiscuous mode
Nov 29 03:15:59 np0005539551 NetworkManager[48922]: <info>  [1764404159.1927] device (tap821bcebf-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:59Z|00400|binding|INFO|Releasing lport 821bcebf-20b5-454c-b867-9e53797f68b6 from this chassis (sb_readonly=0)
Nov 29 03:15:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:59Z|00401|binding|INFO|Setting lport 821bcebf-20b5-454c-b867-9e53797f68b6 down in Southbound
Nov 29 03:15:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:15:59Z|00402|binding|INFO|Removing iface tap821bcebf-20 ovn-installed in OVS
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.199 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.202 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:59:20 10.100.0.8'], port_security=['fa:16:3e:f1:59:20 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=821bcebf-20b5-454c-b867-9e53797f68b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.204 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 821bcebf-20b5-454c-b867-9e53797f68b6 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 unbound from our chassis#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.205 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab70b036-b5ab-4377-b081-f4b82fdb05c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.206 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfda31c-bf7d-410b-a75d-926d3dfd4639]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.207 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace which is not needed anymore#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539551 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000064.scope: Consumed 7.326s CPU time.
Nov 29 03:15:59 np0005539551 systemd-machined[190756]: Machine qemu-43-instance-00000064 terminated.
Nov 29 03:15:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:15:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:59.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.318 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.323 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.330 227364 INFO nova.virt.libvirt.driver [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Instance destroyed successfully.#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.331 227364 DEBUG nova.objects.instance [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'resources' on Instance uuid 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.349 227364 DEBUG nova.virt.libvirt.vif [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-1',id=100,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:52Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:59 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [NOTICE]   (263735) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:59 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [NOTICE]   (263735) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:59 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [WARNING]  (263735) : Exiting Master process...
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.350 227364 DEBUG nova.network.os_vif_util [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "821bcebf-20b5-454c-b867-9e53797f68b6", "address": "fa:16:3e:f1:59:20", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap821bcebf-20", "ovs_interfaceid": "821bcebf-20b5-454c-b867-9e53797f68b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.351 227364 DEBUG nova.network.os_vif_util [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.351 227364 DEBUG os_vif [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:59 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [ALERT]    (263735) : Current worker (263737) exited with code 143 (Terminated)
Nov 29 03:15:59 np0005539551 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[263731]: [WARNING]  (263735) : All workers exited. Exiting... (0)
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.353 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.353 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap821bcebf-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:59 np0005539551 systemd[1]: libpod-f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.357 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.360 227364 INFO os_vif [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:59:20,bridge_name='br-int',has_traffic_filtering=True,id=821bcebf-20b5-454c-b867-9e53797f68b6,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap821bcebf-20')#033[00m
Nov 29 03:15:59 np0005539551 podman[263771]: 2025-11-29 08:15:59.361884399 +0000 UTC m=+0.060689473 container died f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:15:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d9340dfc939c5fdf921eda4687be06f97a2a2c1727b7b19bacc3f6da2d89afca-merged.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539551 podman[263771]: 2025-11-29 08:15:59.400442258 +0000 UTC m=+0.099247332 container cleanup f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:59 np0005539551 systemd[1]: libpod-conmon-f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539551 podman[263824]: 2025-11-29 08:15:59.463404909 +0000 UTC m=+0.040142410 container remove f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.468 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7f90cb1e-2c8e-47de-acba-0d0853574c96]: (4, ('Sat Nov 29 08:15:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d)\nf96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d\nSat Nov 29 08:15:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (f96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d)\nf96e04fc0abf2238025a4816e29f7769cfb6aa9b119cc908703bf7f93e3dcf1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.470 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e50d43-e184-488d-8e61-ccabec590ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.472 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.473 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 kernel: tapab70b036-b0: left promiscuous mode
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.476 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.478 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f131f99-1115-4e19-899a-32b45fdeedcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 nova_compute[227360]: 2025-11-29 08:15:59.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.492 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[102181ca-3a35-4c5d-80a5-83d5b5f0c6ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.494 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3b132107-7bf2-4c89-bace-0ea331503050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.508 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75762a9c-36b0-466a-b6d8-e00e9abee01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719733, 'reachable_time': 41302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263840, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539551 systemd[1]: run-netns-ovnmeta\x2dab70b036\x2db5ab\x2d4377\x2db081\x2df4b82fdb05c5.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.511 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:15:59.511 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c0357ddd-743a-4a77-b28b-c166830b9ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.548 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:00.705 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:00.706 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.728 227364 INFO nova.virt.libvirt.driver [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Deleting instance files /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_del#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.729 227364 INFO nova.virt.libvirt.driver [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Deletion of /var/lib/nova/instances/9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c_del complete#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.845 227364 INFO nova.compute.manager [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Took 2.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.846 227364 DEBUG oslo.service.loopingcall [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.846 227364 DEBUG nova.compute.manager [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:00 np0005539551 nova_compute[227360]: 2025-11-29 08:16:00.846 227364 DEBUG nova.network.neutron [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.849 227364 DEBUG nova.network.neutron [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.881 227364 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-unplugged-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.881 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.882 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.882 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.882 227364 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] No waiting events found dispatching network-vif-unplugged-821bcebf-20b5-454c-b867-9e53797f68b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.882 227364 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-unplugged-821bcebf-20b5-454c-b867-9e53797f68b6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.883 227364 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.883 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.883 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.883 227364 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.884 227364 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] No waiting events found dispatching network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.884 227364 WARNING nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received unexpected event network-vif-plugged-821bcebf-20b5-454c-b867-9e53797f68b6 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.886 227364 INFO nova.compute.manager [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Took 1.04 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.950 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.951 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539551 nova_compute[227360]: 2025-11-29 08:16:01.995 227364 DEBUG nova.compute.manager [req-77e4d8b1-81b5-4906-a803-fb8e8e1a25c7 req-b1b20048-3d9e-47d9-a568-8b99a5a15ab8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Received event network-vif-deleted-821bcebf-20b5-454c-b867-9e53797f68b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.043 227364 DEBUG oslo_concurrency.processutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3772126964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.497 227364 DEBUG oslo_concurrency.processutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.506 227364 DEBUG nova.compute.provider_tree [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.526 227364 DEBUG nova.scheduler.client.report [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.560 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.668 227364 INFO nova.scheduler.client.report [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Deleted allocations for instance 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c#033[00m
Nov 29 03:16:02 np0005539551 nova_compute[227360]: 2025-11-29 08:16:02.837 227364 DEBUG oslo_concurrency.lockutils [None req-db6d4149-68c8-40f9-ab2b-7bcf25745f61 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:03 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 03:16:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:03.989888424 +0000 UTC m=+0.020738748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.265829479 +0000 UTC m=+0.296679763 container create 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:04 np0005539551 systemd[1]: Started libpod-conmon-8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac.scope.
Nov 29 03:16:04 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:16:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:04.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:04 np0005539551 nova_compute[227360]: 2025-11-29 08:16:04.393 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.404867951 +0000 UTC m=+0.435718265 container init 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.413148789 +0000 UTC m=+0.443999073 container start 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.41699578 +0000 UTC m=+0.447846064 container attach 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:16:04 np0005539551 vigilant_chatelet[264152]: 167 167
Nov 29 03:16:04 np0005539551 systemd[1]: libpod-8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac.scope: Deactivated successfully.
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.420527204 +0000 UTC m=+0.451377488 container died 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:16:04 np0005539551 systemd[1]: var-lib-containers-storage-overlay-c62ca99dc595fc75d5b45bb7a258c38ef70fc9a3e098466d29eb462c79f8d6bd-merged.mount: Deactivated successfully.
Nov 29 03:16:04 np0005539551 podman[264135]: 2025-11-29 08:16:04.471816938 +0000 UTC m=+0.502667222 container remove 8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:16:04 np0005539551 systemd[1]: libpod-conmon-8c464c1329a4d11c976c9aa20c03d3b1fb8b6153e56bba14ff9c5617f123fdac.scope: Deactivated successfully.
Nov 29 03:16:04 np0005539551 podman[264178]: 2025-11-29 08:16:04.629039609 +0000 UTC m=+0.036242468 container create 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:16:04 np0005539551 systemd[1]: Started libpod-conmon-0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc.scope.
Nov 29 03:16:04 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:16:04 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1752efbaa12aa00456bb5a585cf9ccba7772d41f9344c4190e5cf317ae1745c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1752efbaa12aa00456bb5a585cf9ccba7772d41f9344c4190e5cf317ae1745c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1752efbaa12aa00456bb5a585cf9ccba7772d41f9344c4190e5cf317ae1745c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1752efbaa12aa00456bb5a585cf9ccba7772d41f9344c4190e5cf317ae1745c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539551 podman[264178]: 2025-11-29 08:16:04.702735774 +0000 UTC m=+0.109938633 container init 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:16:04 np0005539551 podman[264178]: 2025-11-29 08:16:04.613690064 +0000 UTC m=+0.020892953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:04 np0005539551 podman[264178]: 2025-11-29 08:16:04.711410283 +0000 UTC m=+0.118613142 container start 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:16:04 np0005539551 podman[264178]: 2025-11-29 08:16:04.715137192 +0000 UTC m=+0.122340071 container attach 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619920075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:05 np0005539551 nova_compute[227360]: 2025-11-29 08:16:05.549 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]: [
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:    {
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "available": false,
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "ceph_device": false,
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "lsm_data": {},
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "lvs": [],
Nov 29 03:16:05 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "path": "/dev/sr0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "rejected_reasons": [
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "Has a FileSystem",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "Insufficient space (<5GB)"
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        ],
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        "sys_api": {
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "actuators": null,
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "device_nodes": "sr0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "devname": "sr0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "human_readable_size": "482.00 KB",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "id_bus": "ata",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "model": "QEMU DVD-ROM",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "nr_requests": "2",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "parent": "/dev/sr0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "partitions": {},
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "path": "/dev/sr0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "removable": "1",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "rev": "2.5+",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "ro": "0",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "rotational": "1",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "sas_address": "",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "sas_device_handle": "",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "scheduler_mode": "mq-deadline",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "sectors": 0,
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "sectorsize": "2048",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "size": 493568.0,
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "support_discard": "2048",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "type": "disk",
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:            "vendor": "QEMU"
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:        }
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]:    }
Nov 29 03:16:05 np0005539551 infallible_vaughan[264194]: ]
Nov 29 03:16:05 np0005539551 systemd[1]: libpod-0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc.scope: Deactivated successfully.
Nov 29 03:16:05 np0005539551 podman[264178]: 2025-11-29 08:16:05.870570827 +0000 UTC m=+1.277773696 container died 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:16:05 np0005539551 systemd[1]: libpod-0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc.scope: Consumed 1.155s CPU time.
Nov 29 03:16:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay-1752efbaa12aa00456bb5a585cf9ccba7772d41f9344c4190e5cf317ae1745c9-merged.mount: Deactivated successfully.
Nov 29 03:16:05 np0005539551 podman[264178]: 2025-11-29 08:16:05.97715154 +0000 UTC m=+1.384354399 container remove 0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:16:05 np0005539551 systemd[1]: libpod-conmon-0358ec0c79b23aa37d36683517c0ecd94bc0c79506df370dc9c9db85685415fc.scope: Deactivated successfully.
Nov 29 03:16:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 03:16:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:06.708 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:16:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539551 nova_compute[227360]: 2025-11-29 08:16:09.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:09Z|00403|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:09 np0005539551 nova_compute[227360]: 2025-11-29 08:16:09.735 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:10.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:10 np0005539551 nova_compute[227360]: 2025-11-29 08:16:10.595 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:12.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:14 np0005539551 nova_compute[227360]: 2025-11-29 08:16:14.328 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404159.327413, 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:14 np0005539551 nova_compute[227360]: 2025-11-29 08:16:14.329 227364 INFO nova.compute.manager [-] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:14 np0005539551 nova_compute[227360]: 2025-11-29 08:16:14.356 227364 DEBUG nova.compute.manager [None req-293d1347-f6cc-4d5e-9ddd-6dc4d3350ade - - - - - -] [instance: 9c2bd2ab-aab3-4741-9c99-8edc34f2ff5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:14 np0005539551 nova_compute[227360]: 2025-11-29 08:16:14.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:14.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:15.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:15 np0005539551 nova_compute[227360]: 2025-11-29 08:16:15.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:16.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:17 np0005539551 podman[265388]: 2025-11-29 08:16:17.659851247 +0000 UTC m=+0.096164101 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:16:17 np0005539551 podman[265387]: 2025-11-29 08:16:17.660621397 +0000 UTC m=+0.104814419 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:16:17 np0005539551 podman[265386]: 2025-11-29 08:16:17.705316897 +0000 UTC m=+0.149498819 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:16:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:18.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539551 nova_compute[227360]: 2025-11-29 08:16:19.403 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:19.866 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:19.867 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:16:19.867 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:20.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:20 np0005539551 nova_compute[227360]: 2025-11-29 08:16:20.479 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:20 np0005539551 nova_compute[227360]: 2025-11-29 08:16:20.599 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 03:16:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:21.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:22.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:24 np0005539551 nova_compute[227360]: 2025-11-29 08:16:24.254 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:24 np0005539551 nova_compute[227360]: 2025-11-29 08:16:24.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:25 np0005539551 nova_compute[227360]: 2025-11-29 08:16:25.601 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:16:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:26.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.717 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.717 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.718 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:16:26 np0005539551 nova_compute[227360]: 2025-11-29 08:16:26.718 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.540 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.564 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.565 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.565 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.566 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539551 nova_compute[227360]: 2025-11-29 08:16:29.566 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.443 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.444 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.444 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.444 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.444 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.604 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1375637355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:30 np0005539551 nova_compute[227360]: 2025-11-29 08:16:30.908 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.031 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.031 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.205 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.206 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4372MB free_disk=20.8306884765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.207 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.207 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.300 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.300 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.301 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:16:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:31.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.435 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 03:16:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3306258165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.850 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.855 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.874 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.905 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:16:31 np0005539551 nova_compute[227360]: 2025-11-29 08:16:31.905 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:32.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539551 nova_compute[227360]: 2025-11-29 08:16:33.905 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:16:34 np0005539551 nova_compute[227360]: 2025-11-29 08:16:34.412 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:34.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:35 np0005539551 nova_compute[227360]: 2025-11-29 08:16:35.606 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:36.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:36Z|00404|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:36 np0005539551 nova_compute[227360]: 2025-11-29 08:16:36.472 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:36Z|00405|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:36 np0005539551 nova_compute[227360]: 2025-11-29 08:16:36.721 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:37.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:38.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:39 np0005539551 nova_compute[227360]: 2025-11-29 08:16:39.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:16:39 np0005539551 nova_compute[227360]: 2025-11-29 08:16:39.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:40 np0005539551 nova_compute[227360]: 2025-11-29 08:16:40.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:16:40 np0005539551 nova_compute[227360]: 2025-11-29 08:16:40.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:16:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:40.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:40 np0005539551 nova_compute[227360]: 2025-11-29 08:16:40.609 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:42.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:43.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:44 np0005539551 nova_compute[227360]: 2025-11-29 08:16:44.417 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:45.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:45 np0005539551 nova_compute[227360]: 2025-11-29 08:16:45.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:16:45 np0005539551 nova_compute[227360]: 2025-11-29 08:16:45.611 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:48 np0005539551 podman[265498]: 2025-11-29 08:16:48.609104393 +0000 UTC m=+0.058229458 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:16:48 np0005539551 podman[265497]: 2025-11-29 08:16:48.620628559 +0000 UTC m=+0.070707109 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:16:48 np0005539551 podman[265496]: 2025-11-29 08:16:48.650683761 +0000 UTC m=+0.102211999 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:16:48 np0005539551 nova_compute[227360]: 2025-11-29 08:16:48.933 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:48 np0005539551 nova_compute[227360]: 2025-11-29 08:16:48.934 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:48 np0005539551 nova_compute[227360]: 2025-11-29 08:16:48.964 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.060 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.060 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.070 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.070 227364 INFO nova.compute.claims [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.243 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:16:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.456 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.634 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.635 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/343790644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.658 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.678 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.685 227364 DEBUG nova.compute.provider_tree [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.710 227364 DEBUG nova.scheduler.client.report [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.733 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.765 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.766 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.771 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.782 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.782 227364 INFO nova.compute.claims [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.893 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.911 227364 INFO nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:49 np0005539551 nova_compute[227360]: 2025-11-29 08:16:49.933 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.007 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.061 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.065 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.065 227364 INFO nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Creating image(s)#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.111 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.146 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.180 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.183 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.259 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.261 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.262 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.262 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.297 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.302 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/685295172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.471 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.478 227364 DEBUG nova.compute.provider_tree [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.581 227364 DEBUG nova.scheduler.client.report [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.586 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.624 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.625 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.669 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] resizing rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.707 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.768 227364 INFO nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.782 227364 DEBUG nova.objects.instance [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'migration_context' on Instance uuid f5f315c0-7e8f-4015-8c35-98b7726f35bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.810 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.814 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.814 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Ensure instance console log exists: /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.815 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.815 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.816 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.817 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.823 227364 WARNING nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.841 227364 DEBUG nova.virt.libvirt.host [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.842 227364 DEBUG nova.virt.libvirt.host [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.845 227364 DEBUG nova.virt.libvirt.host [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.846 227364 DEBUG nova.virt.libvirt.host [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.847 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.847 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.848 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.848 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.849 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.849 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.849 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.849 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.850 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.850 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.850 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.850 227364 DEBUG nova.virt.hardware [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.854 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.943 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.945 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.945 227364 INFO nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating image(s)#033[00m
Nov 29 03:16:50 np0005539551 nova_compute[227360]: 2025-11-29 08:16:50.977 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.021 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.053 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.057 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.130 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.131 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.132 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.132 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.154 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.157 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387730868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.378 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.405 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.409 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.430 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.497 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] resizing rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:16:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2802967133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.987 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:51 np0005539551 nova_compute[227360]: 2025-11-29 08:16:51.989 227364 DEBUG nova.objects.instance [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5f315c0-7e8f-4015-8c35-98b7726f35bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.010 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <uuid>f5f315c0-7e8f-4015-8c35-98b7726f35bf</uuid>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <name>instance-00000069</name>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerShowV247Test-server-141478049</nova:name>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:16:50</nova:creationTime>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:user uuid="132e3b01995f42c28e20f5d190885d00">tempest-ServerShowV247Test-1657801549-project-member</nova:user>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <nova:project uuid="98623544266946388dc2821daacd91e3">tempest-ServerShowV247Test-1657801549</nova:project>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="serial">f5f315c0-7e8f-4015-8c35-98b7726f35bf</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="uuid">f5f315c0-7e8f-4015-8c35-98b7726f35bf</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/console.log" append="off"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:16:52 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:16:52 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:16:52 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:16:52 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.055 227364 DEBUG nova.objects.instance [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'migration_context' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.074 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.074 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Ensure instance console log exists: /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.075 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.075 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.076 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.077 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.086 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.087 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.088 227364 INFO nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Using config drive#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.122 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.129 227364 WARNING nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.133 227364 DEBUG nova.virt.libvirt.host [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.133 227364 DEBUG nova.virt.libvirt.host [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.138 227364 DEBUG nova.virt.libvirt.host [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.138 227364 DEBUG nova.virt.libvirt.host [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.139 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.139 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.140 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.140 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.140 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.140 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.140 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.141 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.141 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.141 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.141 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.141 227364 DEBUG nova.virt.hardware [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.143 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:52.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1313156954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.571 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.601 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.608 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.676 227364 INFO nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Creating config drive at /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.681 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvovbtw5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.813 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvovbtw5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.840 227364 DEBUG nova.storage.rbd_utils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:52 np0005539551 nova_compute[227360]: 2025-11-29 08:16:52.843 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.008 227364 DEBUG oslo_concurrency.processutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config f5f315c0-7e8f-4015-8c35-98b7726f35bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.009 227364 INFO nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Deleting local config drive /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf/disk.config because it was imported into RBD.#033[00m
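The three log entries above show Nova's config-drive flow on Ceph-backed storage: build an ISO9660 image with `mkisofs` (volume label `config-2`), import it into the `vms` pool with `rbd import`, then delete the now-redundant local copy. A minimal sketch of that sequence, reconstructed from the exact commands logged above (the `metadata_dir` parameter and function name are illustrative, not Nova's actual API):

```python
import os
import subprocess

instance = "f5f315c0-7e8f-4015-8c35-98b7726f35bf"  # instance UUID from the log above
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"
rbd_image = f"{instance}_disk.config"


def build_and_import_config_drive(metadata_dir: str) -> None:
    """Mirror the mkisofs -> rbd import -> local delete sequence from the log."""
    # 1. Build the ISO9660 config drive; the volume label must be 'config-2'
    #    so cloud-init can find it inside the guest.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", local_iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         metadata_dir],
        check=True)
    # 2. Import the ISO into the Ceph 'vms' pool as a format-2 RBD image,
    #    authenticating as client.openstack (matches the audit lines above).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local_iso, rbd_image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    # 3. Once imported, the local copy is redundant; Nova deletes it
    #    ("Deleting local config drive ... because it was imported into RBD").
    os.unlink(local_iso)
```

The guest then attaches the RBD image directly as a SATA cdrom (visible in the domain XML that Nova generates for the sibling instance further down), so the config drive never needs to live on the hypervisor's local disk.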
Nov 29 03:16:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/407667327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:53 np0005539551 systemd-machined[190756]: New machine qemu-44-instance-00000069.
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.076 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.078 227364 DEBUG nova.objects.instance [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:53 np0005539551 systemd[1]: Started Virtual Machine qemu-44-instance-00000069.
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.098 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <uuid>c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</uuid>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <name>instance-0000006a</name>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerShowV247Test-server-1554820516</nova:name>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:16:52</nova:creationTime>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:user uuid="132e3b01995f42c28e20f5d190885d00">tempest-ServerShowV247Test-1657801549-project-member</nova:user>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <nova:project uuid="98623544266946388dc2821daacd91e3">tempest-ServerShowV247Test-1657801549</nova:project>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="serial">c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="uuid">c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/console.log" append="off"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:16:53 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:16:53 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:16:53 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:16:53 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
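The domain XML dumped above is ordinary libvirt XML and can be inspected programmatically, which is handy when auditing many such log captures. A small sketch using the standard library's `xml.etree` to pull each RBD-backed disk's target device, image name, and monitor list (the embedded XML below is a trimmed stand-in for the logged document, not a verbatim copy):

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the <devices> section of the domain XML logged above.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""


def rbd_disks(xml_text: str):
    """Return (target_dev, rbd_image, monitors) for each RBD network disk."""
    root = ET.fromstring(xml_text)
    out = []
    for disk in root.iter("disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue  # skip local/file-backed disks
        mons = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
        out.append((disk.find("target").get("dev"), src.get("name"), mons))
    return out
```

Run against the full logged XML, this would report both the `vda` root disk and the `sda` config-drive cdrom, each pointing at the same three monitors on port 6789.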
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.168 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.168 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.169 227364 INFO nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Using config drive#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.197 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.652 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404213.6520598, f5f315c0-7e8f-4015-8c35-98b7726f35bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.653 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.655 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.655 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.659 227364 INFO nova.virt.libvirt.driver [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Instance spawned successfully.#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.659 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.678 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.682 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.683 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.683 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.684 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.684 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.684 227364 DEBUG nova.virt.libvirt.driver [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.688 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.733 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.733 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404213.6545744, f5f315c0-7e8f-4015-8c35-98b7726f35bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.733 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.757 227364 INFO nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Took 3.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.757 227364 DEBUG nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.759 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.766 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.806 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.851 227364 INFO nova.compute.manager [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Took 4.82 seconds to build instance.#033[00m
Nov 29 03:16:53 np0005539551 nova_compute[227360]: 2025-11-29 08:16:53.880 227364 DEBUG oslo_concurrency.lockutils [None req-daed2bd2-52a0-4f89-a731-915c7ab35fea 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
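The durations Nova reports above ("Took 3.69 seconds to spawn", "held 4.946s", "returned: 0 in 0.132s") can be cross-checked by diffing the oslo.log timestamps directly, which is useful when correlating latency across services in a capture like this. A sketch, using the mkisofs start/finish pair from this same log (08:16:52.681 to 08:16:52.813) as the worked example:

```python
from datetime import datetime

# oslo.log timestamp format as it appears in the nova_compute lines above.
FMT = "%Y-%m-%d %H:%M:%S.%f"


def elapsed(start: str, end: str) -> float:
    """Seconds between two oslo.log timestamps (both UTC)."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()


# "Running cmd (subprocess): /usr/bin/mkisofs ..." at 08:16:52.681,
# 'CMD "/usr/bin/mkisofs ..." returned: 0 in 0.132s' at 08:16:52.813:
mkisofs_duration = elapsed("2025-11-29 08:16:52.681", "2025-11-29 08:16:52.813")
```

The computed 0.132 s matches the duration oslo_concurrency itself logged, confirming the reported figure is wall-clock time between dispatch and return.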
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.113 227364 INFO nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating config drive at /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config#033[00m
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.119 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw5zfajbn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.252 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw5zfajbn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.287 227364 DEBUG nova.storage.rbd_utils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.292 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:54.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
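The recurring radosgw `beast:` lines (the anonymous `HEAD /` probes every second or so are a load-balancer health check) carry client IP, request, status, and latency in a fixed layout. A sketch of extracting those fields with a regex; the pattern below is an assumption fitted to the lines in this capture, not radosgw's documented format:

```python
import re

# One beast access line from the log above, verbatim.
BEAST = ('beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous '
         '[29/Nov/2025:08:16:52.468 +0000] "HEAD / HTTP/1.0" 200 0 '
         '- - - latency=0.001000026s')

# Hypothetical pattern matching this capture's layout:
# 'beast: <ptr>: <ip> - <user> [<time>] "<request>" <status> <bytes> ... latency=<sec>s'
PAT = re.compile(
    r'beast: \S+: (?P<ip>\S+) .*'
    r'"(?P<req>[^"]+)" (?P<status>\d+) \d+ '
    r'.*latency=(?P<latency>[\d.]+)s')

m = PAT.search(BEAST)
```

Aggregating `latency` across these lines over time is a quick way to spot RGW slowdowns without enabling higher debug levels.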
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.513 227364 DEBUG oslo_concurrency.processutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:54 np0005539551 nova_compute[227360]: 2025-11-29 08:16:54.514 227364 INFO nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deleting local config drive /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config because it was imported into RBD.
Nov 29 03:16:54 np0005539551 systemd-machined[190756]: New machine qemu-45-instance-0000006a.
Nov 29 03:16:54 np0005539551 systemd[1]: Started Virtual Machine qemu-45-instance-0000006a.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.023 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404215.0226374, c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.023 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] VM Resumed (Lifecycle Event)
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.026 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.026 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.029 227364 INFO nova.virt.libvirt.driver [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance spawned successfully.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.030 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.053 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.053 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.054 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.054 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.054 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.055 227364 DEBUG nova.virt.libvirt.driver [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.061 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.065 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.100 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.101 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404215.025511, c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.101 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] VM Started (Lifecycle Event)
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.132 227364 INFO nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Took 4.19 seconds to spawn the instance on the hypervisor.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.132 227364 DEBUG nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.134 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.141 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.187 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.203 227364 INFO nova.compute.manager [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Took 5.49 seconds to build instance.
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.224 227364 DEBUG oslo_concurrency.lockutils [None req-f437e3ea-1676-4d87-805f-affbbb4e0fd2 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539551 nova_compute[227360]: 2025-11-29 08:16:55.614 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:57 np0005539551 nova_compute[227360]: 2025-11-29 08:16:57.646 227364 INFO nova.compute.manager [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Rebuilding instance
Nov 29 03:16:57 np0005539551 nova_compute[227360]: 2025-11-29 08:16:57.985 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.004 227364 DEBUG nova.compute.manager [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.070 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'pci_requests' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.089 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.101 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'resources' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.112 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'migration_context' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.132 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.135 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.279 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:58 np0005539551 NetworkManager[48922]: <info>  [1764404218.2800] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 03:16:58 np0005539551 NetworkManager[48922]: <info>  [1764404218.2814] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 03:16:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:58Z|00406|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:58Z|00407|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:58 np0005539551 nova_compute[227360]: 2025-11-29 08:16:58.471 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:16:59Z|00408|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:59 np0005539551 nova_compute[227360]: 2025-11-29 08:16:59.187 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:16:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 03:16:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 03:16:59 np0005539551 nova_compute[227360]: 2025-11-29 08:16:59.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:00 np0005539551 nova_compute[227360]: 2025-11-29 08:17:00.617 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:04.032 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:17:04 np0005539551 nova_compute[227360]: 2025-11-29 08:17:04.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:04.034 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:17:04 np0005539551 nova_compute[227360]: 2025-11-29 08:17:04.464 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:04.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539551 nova_compute[227360]: 2025-11-29 08:17:05.618 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:06.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000078s ======
Nov 29 03:17:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 29 03:17:07 np0005539551 nova_compute[227360]: 2025-11-29 08:17:07.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:08 np0005539551 nova_compute[227360]: 2025-11-29 08:17:08.175 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:17:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:08.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:09 np0005539551 nova_compute[227360]: 2025-11-29 08:17:09.466 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:10 np0005539551 nova_compute[227360]: 2025-11-29 08:17:10.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:10 np0005539551 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 29 03:17:10 np0005539551 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006a.scope: Consumed 13.076s CPU time.
Nov 29 03:17:10 np0005539551 systemd-machined[190756]: Machine qemu-45-instance-0000006a terminated.
Nov 29 03:17:11 np0005539551 nova_compute[227360]: 2025-11-29 08:17:11.187 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance shutdown successfully after 13 seconds.
Nov 29 03:17:11 np0005539551 nova_compute[227360]: 2025-11-29 08:17:11.192 227364 INFO nova.virt.libvirt.driver [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance destroyed successfully.
Nov 29 03:17:11 np0005539551 nova_compute[227360]: 2025-11-29 08:17:11.196 227364 INFO nova.virt.libvirt.driver [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance destroyed successfully.
Nov 29 03:17:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:11.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:11 np0005539551 nova_compute[227360]: 2025-11-29 08:17:11.702 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deleting instance files /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_del#033[00m
Nov 29 03:17:11 np0005539551 nova_compute[227360]: 2025-11-29 08:17:11.703 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deletion of /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_del complete#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.114 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.114 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating image(s)#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.132 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.151 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.170 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.173 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.230 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.231 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.231 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.232 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.254 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.257 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:12.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.512 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.581 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] resizing rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.694 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.695 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Ensure instance console log exists: /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.695 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.696 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.696 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.698 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.702 227364 WARNING nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.730 227364 DEBUG nova.virt.libvirt.host [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.731 227364 DEBUG nova.virt.libvirt.host [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.740 227364 DEBUG nova.virt.libvirt.host [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.741 227364 DEBUG nova.virt.libvirt.host [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.742 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.742 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.742 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.743 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.744 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.744 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.744 227364 DEBUG nova.virt.hardware [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.744 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:12 np0005539551 nova_compute[227360]: 2025-11-29 08:17:12.766 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1379972330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.191 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.226 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.231 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:13.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1521307812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.666 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.672 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <uuid>c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</uuid>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <name>instance-0000006a</name>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerShowV247Test-server-1554820516</nova:name>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:17:12</nova:creationTime>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:user uuid="132e3b01995f42c28e20f5d190885d00">tempest-ServerShowV247Test-1657801549-project-member</nova:user>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <nova:project uuid="98623544266946388dc2821daacd91e3">tempest-ServerShowV247Test-1657801549</nova:project>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <nova:ports/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="serial">c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="uuid">c06df71c-732b-4ad5-a0c7-5e3ca1d02e36</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/console.log" append="off"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:17:13 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:17:13 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:17:13 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:17:13 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.778 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.779 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.779 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Using config drive
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.804 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.864 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:13 np0005539551 nova_compute[227360]: 2025-11-29 08:17:13.925 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'keypairs' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:14.035 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:17:14 np0005539551 nova_compute[227360]: 2025-11-29 08:17:14.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.109 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Creating config drive at /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.114 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59v1jb_y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.254 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59v1jb_y" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.305 227364 DEBUG nova.storage.rbd_utils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] rbd image c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.312 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.550 227364 DEBUG oslo_concurrency.processutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.552 227364 INFO nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deleting local config drive /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36/disk.config because it was imported into RBD.
Nov 29 03:17:15 np0005539551 nova_compute[227360]: 2025-11-29 08:17:15.625 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:15 np0005539551 systemd-machined[190756]: New machine qemu-46-instance-0000006a.
Nov 29 03:17:15 np0005539551 systemd[1]: Started Virtual Machine qemu-46-instance-0000006a.
Nov 29 03:17:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.849 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.850 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404236.8488748, c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.850 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] VM Resumed (Lifecycle Event)
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.852 227364 DEBUG nova.compute.manager [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.853 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.857 227364 INFO nova.virt.libvirt.driver [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance spawned successfully.
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.857 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.869 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.872 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.882 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.883 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.883 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.884 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.884 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.884 227364 DEBUG nova.virt.libvirt.driver [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.891 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.891 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404236.8499074, c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.892 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] VM Started (Lifecycle Event)
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.921 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.924 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.945 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:17:16 np0005539551 nova_compute[227360]: 2025-11-29 08:17:16.955 227364 DEBUG nova.compute.manager [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:17:17 np0005539551 nova_compute[227360]: 2025-11-29 08:17:17.009 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:17 np0005539551 nova_compute[227360]: 2025-11-29 08:17:17.010 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:17 np0005539551 nova_compute[227360]: 2025-11-29 08:17:17.010 227364 DEBUG nova.objects.instance [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:17:17 np0005539551 nova_compute[227360]: 2025-11-29 08:17:17.075 227364 DEBUG oslo_concurrency.lockutils [None req-81983889-7e3e-439a-a30e-86b31ddbaa53 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.245 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.245 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.246 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.246 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.246 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.247 227364 INFO nova.compute.manager [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Terminating instance
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.248 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "refresh_cache-c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.248 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquired lock "refresh_cache-c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.248 227364 DEBUG nova.network.neutron [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.479 227364 DEBUG nova.network.neutron [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:17:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.893 227364 DEBUG nova.network.neutron [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.908 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Releasing lock "refresh_cache-c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:17:18 np0005539551 nova_compute[227360]: 2025-11-29 08:17:18.909 227364 DEBUG nova.compute.manager [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:17:18 np0005539551 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 29 03:17:18 np0005539551 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Consumed 3.346s CPU time.
Nov 29 03:17:18 np0005539551 systemd-machined[190756]: Machine qemu-46-instance-0000006a terminated.
Nov 29 03:17:19 np0005539551 podman[266911]: 2025-11-29 08:17:19.068064603 +0000 UTC m=+0.062686886 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:17:19 np0005539551 podman[266912]: 2025-11-29 08:17:19.083825399 +0000 UTC m=+0.071808137 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:17:19 np0005539551 podman[266909]: 2025-11-29 08:17:19.121150134 +0000 UTC m=+0.114586696 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.131 227364 INFO nova.virt.libvirt.driver [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance destroyed successfully.
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.131 227364 DEBUG nova.objects.instance [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'resources' on Instance uuid c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.471 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.535 227364 INFO nova.virt.libvirt.driver [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deleting instance files /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_del
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.536 227364 INFO nova.virt.libvirt.driver [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deletion of /var/lib/nova/instances/c06df71c-732b-4ad5-a0c7-5e3ca1d02e36_del complete
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.600 227364 INFO nova.compute.manager [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.601 227364 DEBUG oslo.service.loopingcall [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.601 227364 DEBUG nova.compute.manager [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.601 227364 DEBUG nova.network.neutron [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.784 227364 DEBUG nova.network.neutron [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.813 227364 DEBUG nova.network.neutron [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.834 227364 INFO nova.compute.manager [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Took 0.23 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:19.867 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:19.867 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:17:19.867 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.919 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:19 np0005539551 nova_compute[227360]: 2025-11-29 08:17:19.920 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.039 227364 DEBUG oslo_concurrency.processutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1266211709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.500 227364 DEBUG oslo_concurrency.processutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.507 227364 DEBUG nova.compute.provider_tree [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:20.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.538 227364 DEBUG nova.scheduler.client.report [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.570 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.594 227364 INFO nova.scheduler.client.report [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Deleted allocations for instance c06df71c-732b-4ad5-a0c7-5e3ca1d02e36#033[00m
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.628 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:20 np0005539551 nova_compute[227360]: 2025-11-29 08:17:20.680 227364 DEBUG oslo_concurrency.lockutils [None req-9cc5c5b3-2e74-4f25-b4e4-ff31dd77c09c 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "c06df71c-732b-4ad5-a0c7-5e3ca1d02e36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.263 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.264 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.264 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.264 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.265 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.266 227364 INFO nova.compute.manager [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Terminating instance#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.266 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "refresh_cache-f5f315c0-7e8f-4015-8c35-98b7726f35bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.267 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquired lock "refresh_cache-f5f315c0-7e8f-4015-8c35-98b7726f35bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:21 np0005539551 nova_compute[227360]: 2025-11-29 08:17:21.267 227364 DEBUG nova.network.neutron [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:22 np0005539551 nova_compute[227360]: 2025-11-29 08:17:22.139 227364 DEBUG nova.network.neutron [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:22.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:22 np0005539551 nova_compute[227360]: 2025-11-29 08:17:22.901 227364 DEBUG nova.network.neutron [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:22 np0005539551 nova_compute[227360]: 2025-11-29 08:17:22.925 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Releasing lock "refresh_cache-f5f315c0-7e8f-4015-8c35-98b7726f35bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:22 np0005539551 nova_compute[227360]: 2025-11-29 08:17:22.926 227364 DEBUG nova.compute.manager [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:23 np0005539551 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 29 03:17:23 np0005539551 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000069.scope: Consumed 14.081s CPU time.
Nov 29 03:17:23 np0005539551 systemd-machined[190756]: Machine qemu-44-instance-00000069 terminated.
Nov 29 03:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.151 227364 INFO nova.virt.libvirt.driver [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Instance destroyed successfully.#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.151 227364 DEBUG nova.objects.instance [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lazy-loading 'resources' on Instance uuid f5f315c0-7e8f-4015-8c35-98b7726f35bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.695 227364 INFO nova.virt.libvirt.driver [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Deleting instance files /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf_del#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.696 227364 INFO nova.virt.libvirt.driver [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Deletion of /var/lib/nova/instances/f5f315c0-7e8f-4015-8c35-98b7726f35bf_del complete#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.757 227364 INFO nova.compute.manager [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.758 227364 DEBUG oslo.service.loopingcall [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.758 227364 DEBUG nova.compute.manager [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:23 np0005539551 nova_compute[227360]: 2025-11-29 08:17:23.759 227364 DEBUG nova.network.neutron [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.140 227364 DEBUG nova.network.neutron [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.154 227364 DEBUG nova.network.neutron [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.181 227364 INFO nova.compute.manager [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Took 0.42 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.230 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.231 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.286 227364 DEBUG oslo_concurrency.processutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.474 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:24.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/82600081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.694 227364 DEBUG oslo_concurrency.processutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.701 227364 DEBUG nova.compute.provider_tree [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.718 227364 DEBUG nova.scheduler.client.report [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.749 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.798 227364 INFO nova.scheduler.client.report [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Deleted allocations for instance f5f315c0-7e8f-4015-8c35-98b7726f35bf#033[00m
Nov 29 03:17:24 np0005539551 nova_compute[227360]: 2025-11-29 08:17:24.927 227364 DEBUG oslo_concurrency.lockutils [None req-0136864a-ba33-4178-8571-8b654815df21 132e3b01995f42c28e20f5d190885d00 98623544266946388dc2821daacd91e3 - - default default] Lock "f5f315c0-7e8f-4015-8c35-98b7726f35bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:25 np0005539551 nova_compute[227360]: 2025-11-29 08:17:25.630 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:17:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:28.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.928 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.928 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.928 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:17:28 np0005539551 nova_compute[227360]: 2025-11-29 08:17:28.929 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:29 np0005539551 nova_compute[227360]: 2025-11-29 08:17:29.513 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:30.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.674 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.790 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.813 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.814 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.814 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.814 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539551 nova_compute[227360]: 2025-11-29 08:17:30.814 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:32.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1461182053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.880 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.947 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:32 np0005539551 nova_compute[227360]: 2025-11-29 08:17:32.948 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.102 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.104 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4328MB free_disk=20.8553466796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.104 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.105 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.178 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.178 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.178 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.222 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2786273399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.642 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.649 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.689 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.714 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:17:33 np0005539551 nova_compute[227360]: 2025-11-29 08:17:33.715 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:34 np0005539551 nova_compute[227360]: 2025-11-29 08:17:34.130 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404239.128985, c06df71c-732b-4ad5-a0c7-5e3ca1d02e36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:34 np0005539551 nova_compute[227360]: 2025-11-29 08:17:34.131 227364 INFO nova.compute.manager [-] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:34 np0005539551 nova_compute[227360]: 2025-11-29 08:17:34.165 227364 DEBUG nova.compute.manager [None req-3ceddab7-717c-4108-9041-7d3d7e379632 - - - - - -] [instance: c06df71c-732b-4ad5-a0c7-5e3ca1d02e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:34 np0005539551 nova_compute[227360]: 2025-11-29 08:17:34.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:34.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:35.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:35 np0005539551 nova_compute[227360]: 2025-11-29 08:17:35.676 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:35 np0005539551 nova_compute[227360]: 2025-11-29 08:17:35.717 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 03:17:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:36.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 03:17:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:38 np0005539551 nova_compute[227360]: 2025-11-29 08:17:38.149 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404243.1482625, f5f315c0-7e8f-4015-8c35-98b7726f35bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:38 np0005539551 nova_compute[227360]: 2025-11-29 08:17:38.150 227364 INFO nova.compute.manager [-] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:38 np0005539551 nova_compute[227360]: 2025-11-29 08:17:38.172 227364 DEBUG nova.compute.manager [None req-0a604ad1-a0cf-42d9-958a-4704228a8c38 - - - - - -] [instance: f5f315c0-7e8f-4015-8c35-98b7726f35bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 03:17:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:38.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539551 nova_compute[227360]: 2025-11-29 08:17:39.518 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539551 nova_compute[227360]: 2025-11-29 08:17:39.644 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539551 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 03:17:40 np0005539551 irqbalance[784]: IRQ 26 affinity is now unmanaged
Nov 29 03:17:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:40.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:40 np0005539551 nova_compute[227360]: 2025-11-29 08:17:40.678 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.863504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261863567, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2115, "num_deletes": 257, "total_data_size": 4878249, "memory_usage": 4959416, "flush_reason": "Manual Compaction"}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261881265, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 3153680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43265, "largest_seqno": 45374, "table_properties": {"data_size": 3144977, "index_size": 5261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18918, "raw_average_key_size": 20, "raw_value_size": 3127233, "raw_average_value_size": 3388, "num_data_blocks": 228, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404104, "oldest_key_time": 1764404104, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 17864 microseconds, and 7449 cpu microseconds.
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.881363) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 3153680 bytes OK
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.881385) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.882810) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.882827) EVENT_LOG_v1 {"time_micros": 1764404261882822, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.882846) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 4868647, prev total WAL file size 4868647, number of live WAL files 2.
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.884172) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323536' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(3079KB)], [84(10029KB)]
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261884222, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 13423469, "oldest_snapshot_seqno": -1}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7805 keys, 13255063 bytes, temperature: kUnknown
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261973480, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 13255063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13201367, "index_size": 33104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 201625, "raw_average_key_size": 25, "raw_value_size": 13060286, "raw_average_value_size": 1673, "num_data_blocks": 1310, "num_entries": 7805, "num_filter_entries": 7805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.973904) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 13255063 bytes
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.975651) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.2 rd, 148.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(8.5) write-amplify(4.2) OK, records in: 8339, records dropped: 534 output_compression: NoCompression
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.975671) EVENT_LOG_v1 {"time_micros": 1764404261975661, "job": 52, "event": "compaction_finished", "compaction_time_micros": 89391, "compaction_time_cpu_micros": 32151, "output_level": 6, "num_output_files": 1, "total_output_size": 13255063, "num_input_records": 8339, "num_output_records": 7805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261976478, "job": 52, "event": "table_file_deletion", "file_number": 86}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261978709, "job": 52, "event": "table_file_deletion", "file_number": 84}
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.884118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.978799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.978806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.978808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.978810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:17:41.978812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:42 np0005539551 nova_compute[227360]: 2025-11-29 08:17:42.716 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:42 np0005539551 nova_compute[227360]: 2025-11-29 08:17:42.716 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:42 np0005539551 nova_compute[227360]: 2025-11-29 08:17:42.717 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:17:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539551 nova_compute[227360]: 2025-11-29 08:17:44.564 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:45 np0005539551 nova_compute[227360]: 2025-11-29 08:17:45.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539551 nova_compute[227360]: 2025-11-29 08:17:45.762 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:47.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:49.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:49 np0005539551 nova_compute[227360]: 2025-11-29 08:17:49.592 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:49 np0005539551 podman[267161]: 2025-11-29 08:17:49.642095299 +0000 UTC m=+0.087162642 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:49 np0005539551 podman[267160]: 2025-11-29 08:17:49.653013608 +0000 UTC m=+0.092559815 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:17:49 np0005539551 podman[267159]: 2025-11-29 08:17:49.697114662 +0000 UTC m=+0.136666779 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:17:49 np0005539551 nova_compute[227360]: 2025-11-29 08:17:49.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:50 np0005539551 nova_compute[227360]: 2025-11-29 08:17:50.719 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:51.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:53.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:54 np0005539551 nova_compute[227360]: 2025-11-29 08:17:54.641 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:55 np0005539551 nova_compute[227360]: 2025-11-29 08:17:55.768 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:57.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 03:17:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:17:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:59.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 03:17:59 np0005539551 nova_compute[227360]: 2025-11-29 08:17:59.655 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:17:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2693065625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:17:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:17:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2693065625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:00 np0005539551 nova_compute[227360]: 2025-11-29 08:18:00.769 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:01.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:02.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:03.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928269535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928269535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:04 np0005539551 nova_compute[227360]: 2025-11-29 08:18:04.658 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:05 np0005539551 nova_compute[227360]: 2025-11-29 08:18:05.772 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 03:18:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 03:18:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 03:18:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:09.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:09 np0005539551 nova_compute[227360]: 2025-11-29 08:18:09.659 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:10.503 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:10 np0005539551 nova_compute[227360]: 2025-11-29 08:18:10.503 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:10.504 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:10.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:10 np0005539551 nova_compute[227360]: 2025-11-29 08:18:10.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:14.506 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:14 np0005539551 nova_compute[227360]: 2025-11-29 08:18:14.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:15 np0005539551 nova_compute[227360]: 2025-11-29 08:18:15.776 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.590750) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296590827, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 605, "num_deletes": 252, "total_data_size": 922431, "memory_usage": 933760, "flush_reason": "Manual Compaction"}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296599125, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 607843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45379, "largest_seqno": 45979, "table_properties": {"data_size": 604733, "index_size": 1018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7508, "raw_average_key_size": 19, "raw_value_size": 598457, "raw_average_value_size": 1554, "num_data_blocks": 45, "num_entries": 385, "num_filter_entries": 385, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404262, "oldest_key_time": 1764404262, "file_creation_time": 1764404296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 8430 microseconds, and 4008 cpu microseconds.
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.599182) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 607843 bytes OK
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.599209) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.602149) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.602170) EVENT_LOG_v1 {"time_micros": 1764404296602163, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.602191) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 918988, prev total WAL file size 918988, number of live WAL files 2.
Nov 29 03:18:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.602876) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(593KB)], [87(12MB)]
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296602916, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13862906, "oldest_snapshot_seqno": -1}
Nov 29 03:18:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7674 keys, 11981449 bytes, temperature: kUnknown
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296714892, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11981449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11929659, "index_size": 31518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199677, "raw_average_key_size": 26, "raw_value_size": 11791739, "raw_average_value_size": 1536, "num_data_blocks": 1237, "num_entries": 7674, "num_filter_entries": 7674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.715179) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11981449 bytes
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.716453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.7 rd, 106.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(42.5) write-amplify(19.7) OK, records in: 8190, records dropped: 516 output_compression: NoCompression
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.716473) EVENT_LOG_v1 {"time_micros": 1764404296716464, "job": 54, "event": "compaction_finished", "compaction_time_micros": 112053, "compaction_time_cpu_micros": 45233, "output_level": 6, "num_output_files": 1, "total_output_size": 11981449, "num_input_records": 8190, "num_output_records": 7674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296716736, "job": 54, "event": "table_file_deletion", "file_number": 89}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296719536, "job": 54, "event": "table_file_deletion", "file_number": 87}
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.602775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.719587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.719593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.719595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.719597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:18:16.719598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:19.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.528 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.529 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.529 227364 INFO nova.compute.manager [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Unshelving#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.634 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.635 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.640 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_requests' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.654 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.663 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.666 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.666 227364 INFO nova.compute.claims [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:18:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:19.868 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:19.868 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:19.868 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539551 nova_compute[227360]: 2025-11-29 08:18:19.897 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/590055784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.356 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.364 227364 DEBUG nova.compute.provider_tree [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.384 227364 DEBUG nova.scheduler.client.report [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:20Z|00409|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.411 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.457 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:20 np0005539551 podman[267243]: 2025-11-29 08:18:20.592399513 +0000 UTC m=+0.047043072 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:18:20 np0005539551 podman[267242]: 2025-11-29 08:18:20.621337138 +0000 UTC m=+0.079585263 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 03:18:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:20.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:20 np0005539551 podman[267241]: 2025-11-29 08:18:20.657141733 +0000 UTC m=+0.118661554 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:18:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:20 np0005539551 nova_compute[227360]: 2025-11-29 08:18:20.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:21 np0005539551 nova_compute[227360]: 2025-11-29 08:18:21.219 227364 INFO nova.network.neutron [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating port 2d17165a-5bc8-402e-a08c-e6f21188fb1b with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:18:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:21.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.289 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.290 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.290 227364 DEBUG nova.network.neutron [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.413 227364 DEBUG nova.compute.manager [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.414 227364 DEBUG nova.compute.manager [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing instance network info cache due to event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:18:22 np0005539551 nova_compute[227360]: 2025-11-29 08:18:22.414 227364 DEBUG oslo_concurrency.lockutils [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:22.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.422 227364 DEBUG nova.network.neutron [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.462 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.464 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.464 227364 INFO nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating image(s)#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.497 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.501 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.503 227364 DEBUG oslo_concurrency.lockutils [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.503 227364 DEBUG nova.network.neutron [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.550 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.581 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.585 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "a6579f3d82abc14ac4a94cfd78c04c8f17fb7389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.586 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "a6579f3d82abc14ac4a94cfd78c04c8f17fb7389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:24.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.801 227364 DEBUG nova.virt.libvirt.imagebackend [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.853 227364 DEBUG nova.virt.libvirt.imagebackend [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.854 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5@snap to None/68c19565-6fe1-4c2c-927d-87f801074e18_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:18:24 np0005539551 nova_compute[227360]: 2025-11-29 08:18:24.974 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "a6579f3d82abc14ac4a94cfd78c04c8f17fb7389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.120 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.176 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:18:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:25 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.794 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Image rbd:vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.795 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.795 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Ensure instance console log exists: /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.795 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.796 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.796 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.798 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start _get_guest_xml network_info=[{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:17:50Z,direct_url=<?>,disk_format='raw',id=c28b4cfd-fa14-4a57-95be-5fa2d667ebc5,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-671911008-shelved',owner='1b8899f76f554afc96bb2441424e5a77',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:18:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.817 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.820 227364 WARNING nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.830 227364 DEBUG nova.virt.libvirt.host [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.832 227364 DEBUG nova.virt.libvirt.host [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.835 227364 DEBUG nova.virt.libvirt.host [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.835 227364 DEBUG nova.virt.libvirt.host [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.836 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.836 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:17:50Z,direct_url=<?>,disk_format='raw',id=c28b4cfd-fa14-4a57-95be-5fa2d667ebc5,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-671911008-shelved',owner='1b8899f76f554afc96bb2441424e5a77',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:18:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.837 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.837 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.837 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.837 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.837 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.838 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.838 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.838 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.838 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.839 227364 DEBUG nova.virt.hardware [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.839 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:25 np0005539551 nova_compute[227360]: 2025-11-29 08:18:25.909 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1204477156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.360 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.390 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.394 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1374796542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.844 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.847 227364 DEBUG nova.virt.libvirt.vif [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='c28b4cfd-fa14-4a57-95be-5fa2d667ebc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member',shelved_at='2025-11-29T08:18:02.165022',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='c28b4cfd-fa14-4a57-95be-5fa2d667ebc5'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.847 227364 DEBUG nova.network.os_vif_util [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.849 227364 DEBUG nova.network.os_vif_util [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.850 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.878 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <uuid>68c19565-6fe1-4c2c-927d-87f801074e18</uuid>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <name>instance-00000068</name>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerActionsTestOtherB-server-671911008</nova:name>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:18:25</nova:creationTime>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="c28b4cfd-fa14-4a57-95be-5fa2d667ebc5"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <nova:port uuid="2d17165a-5bc8-402e-a08c-e6f21188fb1b">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="serial">68c19565-6fe1-4c2c-927d-87f801074e18</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="uuid">68c19565-6fe1-4c2c-927d-87f801074e18</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk.config">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ae:bc:67"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <target dev="tap2d17165a-5b"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/console.log" append="off"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:18:26 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:18:26 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:18:26 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:18:26 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
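The domain XML dumped by `_get_guest_xml` above can be inspected programmatically when debugging storage wiring. A minimal sketch using only the Python standard library, with an inline excerpt mirroring the `<disk>` element from the log (illustrative; in practice you would feed the full XML string extracted from the journal):

```python
import xml.etree.ElementTree as ET

# Excerpt mirroring the rbd-backed <disk> element logged above.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def rbd_monitors(xml_text):
    """Return (target_dev, [ceph monitor addresses]) per rbd-backed disk."""
    root = ET.fromstring(xml_text)
    result = []
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        dev = disk.find("target").get("dev")
        mons = ["%s:%s" % (h.get("name"), h.get("port"))
                for h in src.findall("host")]
        result.append((dev, mons))
    return result

print(rbd_monitors(DOMAIN_XML))
```

This quickly confirms which Ceph monitors a guest's disks point at — useful when comparing against the `rbd import` and `disk.config` activity later in the log.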
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.879 227364 DEBUG nova.compute.manager [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Preparing to wait for external event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.879 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.880 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.880 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.881 227364 DEBUG nova.virt.libvirt.vif [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='c28b4cfd-fa14-4a57-95be-5fa2d667ebc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='
virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member',shelved_at='2025-11-29T08:18:02.165022',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='c28b4cfd-fa14-4a57-95be-5fa2d667ebc5'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.882 227364 DEBUG nova.network.os_vif_util [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.883 227364 DEBUG nova.network.os_vif_util [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.883 227364 DEBUG os_vif [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.884 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.884 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.885 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.889 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.889 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d17165a-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.890 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d17165a-5b, col_values=(('external_ids', {'iface-id': '2d17165a-5bc8-402e-a08c-e6f21188fb1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:bc:67', 'vm-uuid': '68c19565-6fe1-4c2c-927d-87f801074e18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
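The two ovsdbapp commands in this transaction (`AddPortCommand` followed by `DbSetCommand` on the Interface's `external_ids`) are what os-vif performs to plug the tap device into br-int. A hypothetical helper sketching the equivalent `ovs-vsctl` argv, populated with the values from the log (the helper itself is not part of nova or os-vif):

```python
def ovs_vsctl_plug_argv(bridge, port, external_ids):
    """Build the ovs-vsctl argv equivalent to AddPortCommand + DbSetCommand."""
    argv = ["ovs-vsctl", "--", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port]
    argv += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
    return argv

argv = ovs_vsctl_plug_argv(
    "br-int", "tap2d17165a-5b",
    {"iface-id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b",
     "iface-status": "active",
     "attached-mac": "fa:16:3e:ae:bc:67",
     "vm-uuid": "68c19565-6fe1-4c2c-927d-87f801074e18"})
print(" ".join(argv))
```

The `iface-id` in `external_ids` is what ovn-controller matches against the Neutron port UUID — which is why the "Claiming lport 2d17165a-..." messages appear further down once the interface shows up in OVSDB.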
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.918 227364 DEBUG nova.network.neutron [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updated VIF entry in instance network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.918 227364 DEBUG nova.network.neutron [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.931 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539551 NetworkManager[48922]: <info>  [1764404306.9321] manager: (tap2d17165a-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.933 227364 DEBUG oslo_concurrency.lockutils [req-64305106-2082-4d49-abe7-0307028a5499 req-1f6e2147-7c30-4168-92a7-fa1580113df3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.940 227364 INFO os_vif [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b')#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.985 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.986 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.986 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:ae:bc:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:18:26 np0005539551 nova_compute[227360]: 2025-11-29 08:18:26.986 227364 INFO nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Using config drive#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.011 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.041 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.084 227364 DEBUG nova.objects.instance [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'keypairs' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:27Z|00410|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:18:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:27.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.507 227364 INFO nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating config drive at /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.515 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_dyl_sw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.646 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_dyl_sw" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
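The mkisofs command above looks unquoted in the log (`-publisher OpenStack Compute 27.5.2-...`) because oslo processutils logs the argv list joined with spaces; nova hands the list straight to the subprocess, so the multi-word publisher string is still a single argument. A small sketch showing how that argv would need quoting if rendered for a shell (paths abbreviated with `...` as placeholders, not the real paths):

```python
import shlex

# Abbreviated reconstruction of the logged argv; "..." stands in for the
# instance-specific paths and is purely a placeholder here.
argv = [
    "/usr/bin/mkisofs",
    "-o", "/var/lib/nova/instances/.../disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmp-src",
]
# shlex.join (Python >= 3.8) quotes the multi-word value for shell use.
print(shlex.join(argv))
```

This explains why copy-pasting the logged CMD line into a shell can fail even though the original subprocess call succeeded.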
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.703 227364 DEBUG nova.storage.rbd_utils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.706 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.954 227364 DEBUG oslo_concurrency.processutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:27 np0005539551 nova_compute[227360]: 2025-11-29 08:18:27.956 227364 INFO nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deleting local config drive /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config because it was imported into RBD.#033[00m
Nov 29 03:18:28 np0005539551 kernel: tap2d17165a-5b: entered promiscuous mode
Nov 29 03:18:28 np0005539551 NetworkManager[48922]: <info>  [1764404308.0118] manager: (tap2d17165a-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 03:18:28 np0005539551 systemd-udevd[267780]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:28Z|00411|binding|INFO|Claiming lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b for this chassis.
Nov 29 03:18:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:28Z|00412|binding|INFO|2d17165a-5bc8-402e-a08c-e6f21188fb1b: Claiming fa:16:3e:ae:bc:67 10.100.0.9
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.054 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bc:67 10.100.0.9'], port_security=['fa:16:3e:ae:bc:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '68c19565-6fe1-4c2c-927d-87f801074e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2d17165a-5bc8-402e-a08c-e6f21188fb1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.055 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2d17165a-5bc8-402e-a08c-e6f21188fb1b in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.058 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:18:28 np0005539551 NetworkManager[48922]: <info>  [1764404308.0619] device (tap2d17165a-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:28 np0005539551 NetworkManager[48922]: <info>  [1764404308.0636] device (tap2d17165a-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.071 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:28Z|00413|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b ovn-installed in OVS
Nov 29 03:18:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:28Z|00414|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b up in Southbound
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.074 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8fbeef-2b69-453d-a901-3e2fc35802db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 systemd-machined[190756]: New machine qemu-47-instance-00000068.
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539551 systemd[1]: Started Virtual Machine qemu-47-instance-00000068.
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.111 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2155fffe-1840-4f6d-9ed3-8275c6b3a8ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.115 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[63275849-f298-42aa-8ab7-595864ff9221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.152 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c56a8acd-92cc-40d3-93ab-31533c55f16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.172 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[72621d2e-2553-437b-b99d-5596083544c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706274, 'reachable_time': 18523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267796, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.196 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f382f9-6a45-4535-ae36-1fc49db64679]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706284, 'tstamp': 706284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267798, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706286, 'tstamp': 706286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267798, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.199 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.204 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.205 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.205 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:28.206 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.417 227364 DEBUG nova.compute.manager [req-b7f43e37-5a06-4a8d-9129-edbdac5c66ad req-26668287-88b4-4df8-b472-878169d23d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.417 227364 DEBUG oslo_concurrency.lockutils [req-b7f43e37-5a06-4a8d-9129-edbdac5c66ad req-26668287-88b4-4df8-b472-878169d23d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.418 227364 DEBUG oslo_concurrency.lockutils [req-b7f43e37-5a06-4a8d-9129-edbdac5c66ad req-26668287-88b4-4df8-b472-878169d23d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.418 227364 DEBUG oslo_concurrency.lockutils [req-b7f43e37-5a06-4a8d-9129-edbdac5c66ad req-26668287-88b4-4df8-b472-878169d23d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.418 227364 DEBUG nova.compute.manager [req-b7f43e37-5a06-4a8d-9129-edbdac5c66ad req-26668287-88b4-4df8-b472-878169d23d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Processing event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.621 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404308.6204505, 68c19565-6fe1-4c2c-927d-87f801074e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.622 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.625 227364 DEBUG nova.compute.manager [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.629 227364 DEBUG nova.virt.libvirt.driver [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.634 227364 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance spawned successfully.#033[00m
Nov 29 03:18:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:28.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.651 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.658 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.691 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.692 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404308.620561, 68c19565-6fe1-4c2c-927d-87f801074e18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.692 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.714 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.718 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404308.6290517, 68c19565-6fe1-4c2c-927d-87f801074e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.718 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.746 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.750 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:28 np0005539551 nova_compute[227360]: 2025-11-29 08:18:28.786 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:29 np0005539551 nova_compute[227360]: 2025-11-29 08:18:29.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 03:18:29 np0005539551 nova_compute[227360]: 2025-11-29 08:18:29.930 227364 DEBUG nova.compute.manager [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:29 np0005539551 nova_compute[227360]: 2025-11-29 08:18:29.999 227364 DEBUG oslo_concurrency.lockutils [None req-37fa8208-5101-4e2f-a0df-823f53d560e3 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:18:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.556 227364 DEBUG nova.compute.manager [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.556 227364 DEBUG oslo_concurrency.lockutils [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.557 227364 DEBUG oslo_concurrency.lockutils [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.557 227364 DEBUG oslo_concurrency.lockutils [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.557 227364 DEBUG nova.compute.manager [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.558 227364 WARNING nova.compute.manager [req-c2243574-205c-451b-b748-5c015f210ce2 req-237483f7-730d-4572-aecf-3f002e65620c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received unexpected event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:30.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.819 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.851 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.852 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.852 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:18:30 np0005539551 nova_compute[227360]: 2025-11-29 08:18:30.852 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:30Z|00415|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:18:31 np0005539551 nova_compute[227360]: 2025-11-29 08:18:31.053 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:31.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:31 np0005539551 nova_compute[227360]: 2025-11-29 08:18:31.932 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:32 np0005539551 nova_compute[227360]: 2025-11-29 08:18:32.757 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [{"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:32 np0005539551 nova_compute[227360]: 2025-11-29 08:18:32.779 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-076bf9f6-6607-4b08-b733-864854aad069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:32 np0005539551 nova_compute[227360]: 2025-11-29 08:18:32.779 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:18:32 np0005539551 nova_compute[227360]: 2025-11-29 08:18:32.780 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:32 np0005539551 nova_compute[227360]: 2025-11-29 08:18:32.780 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.449 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.450 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.450 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.451 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.451 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 03:18:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371447544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:33 np0005539551 nova_compute[227360]: 2025-11-29 08:18:33.970 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.045 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.045 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.049 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.049 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.211 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.212 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4126MB free_disk=20.851512908935547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.213 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.213 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.420 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 076bf9f6-6607-4b08-b733-864854aad069 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.421 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 68c19565-6fe1-4c2c-927d-87f801074e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.422 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.422 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:18:34 np0005539551 nova_compute[227360]: 2025-11-29 08:18:34.502 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/568555367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.090 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.096 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.133 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.163 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.164 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:35.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:35 np0005539551 nova_compute[227360]: 2025-11-29 08:18:35.823 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:36 np0005539551 nova_compute[227360]: 2025-11-29 08:18:36.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.262 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.263 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.263 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.263 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.264 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.265 227364 INFO nova.compute.manager [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Terminating instance#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.266 227364 DEBUG nova.compute.manager [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:18:37 np0005539551 kernel: tap2d17165a-5b (unregistering): left promiscuous mode
Nov 29 03:18:37 np0005539551 NetworkManager[48922]: <info>  [1764404317.3076] device (tap2d17165a-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:37Z|00416|binding|INFO|Releasing lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b from this chassis (sb_readonly=0)
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:37Z|00417|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b down in Southbound
Nov 29 03:18:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:37Z|00418|binding|INFO|Removing iface tap2d17165a-5b ovn-installed in OVS
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.332 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.342 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bc:67 10.100.0.9'], port_security=['fa:16:3e:ae:bc:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '68c19565-6fe1-4c2c-927d-87f801074e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '9', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2d17165a-5bc8-402e-a08c-e6f21188fb1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.343 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2d17165a-5bc8-402e-a08c-e6f21188fb1b in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.345 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.347 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 29 03:18:37 np0005539551 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000068.scope: Consumed 9.712s CPU time.
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.362 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[35a53977-b34e-4056-bbaa-c1fa488b9c37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 systemd-machined[190756]: Machine qemu-47-instance-00000068 terminated.
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.397 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b414d88b-06b0-4243-a200-3add5e03f2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.402 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[87fd1cc6-9340-49c1-bb81-5d6a91889c6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.426 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[006446a0-b3e5-4920-9340-ae37ad3415d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.443 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[569f9f5d-2b56-48da-9616-53859a3b99f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706274, 'reachable_time': 18523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267948, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.458 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b9854f9b-c504-4bf4-b144-84f2e5c7e98a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706284, 'tstamp': 706284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267949, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706286, 'tstamp': 706286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267949, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.459 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.464 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.464 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.464 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.465 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:37.465 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.496 227364 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance destroyed successfully.#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.497 227364 DEBUG nova.objects.instance [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.519 227364 DEBUG nova.virt.libvirt.vif [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.520 227364 DEBUG nova.network.os_vif_util [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.520 227364 DEBUG nova.network.os_vif_util [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.521 227364 DEBUG os_vif [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.522 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.522 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d17165a-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.524 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.525 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.527 227364 INFO os_vif [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b')#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.657 227364 DEBUG nova.compute.manager [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.657 227364 DEBUG oslo_concurrency.lockutils [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.658 227364 DEBUG oslo_concurrency.lockutils [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.658 227364 DEBUG oslo_concurrency.lockutils [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.659 227364 DEBUG nova.compute.manager [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.659 227364 DEBUG nova.compute.manager [req-5a4ed092-1132-4653-99f8-cb04eac346f5 req-f7e011b0-17df-4d04-acba-e3d3f4378aee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:18:37 np0005539551 nova_compute[227360]: 2025-11-29 08:18:37.890 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:38.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.281 227364 INFO nova.virt.libvirt.driver [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deleting instance files /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18_del#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.282 227364 INFO nova.virt.libvirt.driver [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deletion of /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18_del complete#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.331 227364 INFO nova.compute.manager [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Took 2.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.331 227364 DEBUG oslo.service.loopingcall [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.331 227364 DEBUG nova.compute.manager [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.332 227364 DEBUG nova.network.neutron [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:18:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.973 227364 DEBUG nova.compute.manager [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.974 227364 DEBUG oslo_concurrency.lockutils [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.974 227364 DEBUG oslo_concurrency.lockutils [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.974 227364 DEBUG oslo_concurrency.lockutils [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.974 227364 DEBUG nova.compute.manager [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:39 np0005539551 nova_compute[227360]: 2025-11-29 08:18:39.975 227364 WARNING nova.compute.manager [req-aef217aa-ac98-4983-b614-1f707a576f99 req-fab5b451-4f68-48fe-bffc-cb8f9544602c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received unexpected event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:18:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:40.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.708 227364 DEBUG nova.network.neutron [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.736 227364 INFO nova.compute.manager [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Took 1.40 seconds to deallocate network for instance.#033[00m
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.972 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.972 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:40 np0005539551 nova_compute[227360]: 2025-11-29 08:18:40.974 227364 DEBUG nova.compute.manager [req-fc356c05-cb00-41ba-b6fb-e452c862be43 req-e3ef3f11-e1d6-4cad-bf18-89f0c89a0fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-deleted-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.077 227364 DEBUG oslo_concurrency.processutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1860364358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.561 227364 DEBUG oslo_concurrency.processutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.567 227364 DEBUG nova.compute.provider_tree [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.580 227364 DEBUG nova.scheduler.client.report [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.602 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.632 227364 INFO nova.scheduler.client.report [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 68c19565-6fe1-4c2c-927d-87f801074e18#033[00m
Nov 29 03:18:41 np0005539551 nova_compute[227360]: 2025-11-29 08:18:41.730 227364 DEBUG oslo_concurrency.lockutils [None req-3a331dcb-3c35-41d5-8fa0-f93a1162fcbf c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 03:18:42 np0005539551 nova_compute[227360]: 2025-11-29 08:18:42.524 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:42.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:43 np0005539551 nova_compute[227360]: 2025-11-29 08:18:43.212 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:44 np0005539551 nova_compute[227360]: 2025-11-29 08:18:44.163 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:44 np0005539551 nova_compute[227360]: 2025-11-29 08:18:44.164 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:44 np0005539551 nova_compute[227360]: 2025-11-29 08:18:44.164 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:18:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:44.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.529 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.530 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.530 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.530 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.530 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.531 227364 INFO nova.compute.manager [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Terminating instance#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.532 227364 DEBUG nova.compute.manager [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:18:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.871 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539551 kernel: tap5a18d86b-a5 (unregistering): left promiscuous mode
Nov 29 03:18:45 np0005539551 NetworkManager[48922]: <info>  [1764404325.9109] device (tap5a18d86b-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:45Z|00419|binding|INFO|Releasing lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 from this chassis (sb_readonly=0)
Nov 29 03:18:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:45Z|00420|binding|INFO|Setting lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 down in Southbound
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.924 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:45Z|00421|binding|INFO|Removing iface tap5a18d86b-a5 ovn-installed in OVS
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.929 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:45.931 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a5:ff 10.100.0.11'], port_security=['fa:16:3e:46:a5:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '076bf9f6-6607-4b08-b733-864854aad069', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5a18d86b-a59a-42b2-a09d-a9db462f6034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:45.932 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5a18d86b-a59a-42b2-a09d-a9db462f6034 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:18:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:45.933 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:45.934 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a9da66-7566-4178-8998-d621aa6aefde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:45.934 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:18:45 np0005539551 nova_compute[227360]: 2025-11-29 08:18:45.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539551 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 29 03:18:45 np0005539551 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000059.scope: Consumed 27.198s CPU time.
Nov 29 03:18:45 np0005539551 systemd-machined[190756]: Machine qemu-40-instance-00000059 terminated.
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [NOTICE]   (260880) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [NOTICE]   (260880) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [WARNING]  (260880) : Exiting Master process...
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [WARNING]  (260880) : Exiting Master process...
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [ALERT]    (260880) : Current worker (260882) exited with code 143 (Terminated)
Nov 29 03:18:46 np0005539551 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[260876]: [WARNING]  (260880) : All workers exited. Exiting... (0)
Nov 29 03:18:46 np0005539551 systemd[1]: libpod-ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b.scope: Deactivated successfully.
Nov 29 03:18:46 np0005539551 podman[268026]: 2025-11-29 08:18:46.090476389 +0000 UTC m=+0.054541520 container died ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:46 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:46 np0005539551 systemd[1]: var-lib-containers-storage-overlay-2b21185a68d7b22e8a73540c0f7d4306ae5d717f8b11b546fd2c9587f1e92cf9-merged.mount: Deactivated successfully.
Nov 29 03:18:46 np0005539551 podman[268026]: 2025-11-29 08:18:46.136371301 +0000 UTC m=+0.100436432 container cleanup ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:18:46 np0005539551 kernel: tap5a18d86b-a5: entered promiscuous mode
Nov 29 03:18:46 np0005539551 kernel: tap5a18d86b-a5 (unregistering): left promiscuous mode
Nov 29 03:18:46 np0005539551 systemd[1]: libpod-conmon-ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b.scope: Deactivated successfully.
Nov 29 03:18:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:46Z|00422|binding|INFO|Claiming lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 for this chassis.
Nov 29 03:18:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:46Z|00423|binding|INFO|5a18d86b-a59a-42b2-a09d-a9db462f6034: Claiming fa:16:3e:46:a5:ff 10.100.0.11
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.158 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.167 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a5:ff 10.100.0.11'], port_security=['fa:16:3e:46:a5:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '076bf9f6-6607-4b08-b733-864854aad069', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5a18d86b-a59a-42b2-a09d-a9db462f6034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.171 227364 INFO nova.virt.libvirt.driver [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Instance destroyed successfully.#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.171 227364 DEBUG nova.objects.instance [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 076bf9f6-6607-4b08-b733-864854aad069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.188 227364 DEBUG nova.virt.libvirt.vif [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-563695543',display_name='tempest-ServerActionsTestOtherB-server-563695543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-563695543',id=89,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-0xergg3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:37Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=076bf9f6-6607-4b08-b733-864854aad069,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.189 227364 DEBUG nova.network.os_vif_util [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "address": "fa:16:3e:46:a5:ff", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a18d86b-a5", "ovs_interfaceid": "5a18d86b-a59a-42b2-a09d-a9db462f6034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.190 227364 DEBUG nova.network.os_vif_util [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.190 227364 DEBUG os_vif [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.193 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.194 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a18d86b-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:18:46Z|00424|binding|INFO|Releasing lport 5a18d86b-a59a-42b2-a09d-a9db462f6034 from this chassis (sb_readonly=0)
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.199 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.203 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a5:ff 10.100.0.11'], port_security=['fa:16:3e:46:a5:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '076bf9f6-6607-4b08-b733-864854aad069', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=5a18d86b-a59a-42b2-a09d-a9db462f6034) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:46 np0005539551 podman[268057]: 2025-11-29 08:18:46.216839915 +0000 UTC m=+0.053294068 container remove ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.222 227364 INFO os_vif [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a5:ff,bridge_name='br-int',has_traffic_filtering=True,id=5a18d86b-a59a-42b2-a09d-a9db462f6034,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a18d86b-a5')#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.222 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d5326abb-7af1-40b3-8058-0f39014fbf5f]: (4, ('Sat Nov 29 08:18:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b)\nee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b\nSat Nov 29 08:18:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (ee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b)\nee4a6d1646ca21cf24882e05b35e75095fe6b273cc72d2705cd6b5f8e716676b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.225 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19f74788-f620-455f-96e3-0a45e94f45a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.227 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.239 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.257 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.259 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.261 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d7b765-3588-48fe-8cb2-bd26d0d78adc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.276 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b5607447-61e0-4723-9f64-b3d612429136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.278 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebde89bb-f678-4a71-b7ba-0a9a6b379536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.294 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[277379dc-c8e3-4e36-97c7-0e9fa8e37b97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706266, 'reachable_time': 29242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268097, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.298 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.298 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f5d595-949f-4582-a629-be634014ad9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.299 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5a18d86b-a59a-42b2-a09d-a9db462f6034 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.300 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.300 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d8e3d4-1be8-40f5-abdc-25cb26d5ea59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.301 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 5a18d86b-a59a-42b2-a09d-a9db462f6034 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.302 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:18:46.303 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3a60c7-43a5-4762-bcb4-e0239710619c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.321 227364 DEBUG nova.compute.manager [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-unplugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.321 227364 DEBUG oslo_concurrency.lockutils [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.322 227364 DEBUG oslo_concurrency.lockutils [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.322 227364 DEBUG oslo_concurrency.lockutils [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.322 227364 DEBUG nova.compute.manager [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] No waiting events found dispatching network-vif-unplugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:46 np0005539551 nova_compute[227360]: 2025-11-29 08:18:46.322 227364 DEBUG nova.compute.manager [req-8cd70a61-6047-4f23-9ce5-a29a7c2d3bdb req-d4358ce0-295d-41b5-a913-1b16942bdf56 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-unplugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:18:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:46.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.401 227364 INFO nova.virt.libvirt.driver [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Deleting instance files /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069_del#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.402 227364 INFO nova.virt.libvirt.driver [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Deletion of /var/lib/nova/instances/076bf9f6-6607-4b08-b733-864854aad069_del complete#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.483 227364 INFO nova.compute.manager [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Took 1.95 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.484 227364 DEBUG oslo.service.loopingcall [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.484 227364 DEBUG nova.compute.manager [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:18:47 np0005539551 nova_compute[227360]: 2025-11-29 08:18:47.485 227364 DEBUG nova.network.neutron [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:18:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:47.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.437 227364 DEBUG nova.network.neutron [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.446 227364 DEBUG nova.compute.manager [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.446 227364 DEBUG oslo_concurrency.lockutils [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "076bf9f6-6607-4b08-b733-864854aad069-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.447 227364 DEBUG oslo_concurrency.lockutils [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.447 227364 DEBUG oslo_concurrency.lockutils [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.447 227364 DEBUG nova.compute.manager [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] No waiting events found dispatching network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.447 227364 WARNING nova.compute.manager [req-802d6ae3-f214-4165-a742-4ad78c2808c9 req-8c0eacb9-3e6e-4f0d-a8b8-d9f0037cf12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received unexpected event network-vif-plugged-5a18d86b-a59a-42b2-a09d-a9db462f6034 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.471 227364 INFO nova.compute.manager [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Took 0.99 seconds to deallocate network for instance.#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.556 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.557 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.604 227364 DEBUG oslo_concurrency.processutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:48.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:48 np0005539551 nova_compute[227360]: 2025-11-29 08:18:48.804 227364 DEBUG nova.compute.manager [req-ce60d8fb-1a5b-41e5-9c89-7fc40ba693c4 req-48616e1d-cb33-4f4b-a233-8b87bc042537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Received event network-vif-deleted-5a18d86b-a59a-42b2-a09d-a9db462f6034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097729995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.053 227364 DEBUG oslo_concurrency.processutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.058 227364 DEBUG nova.compute.provider_tree [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.075 227364 DEBUG nova.scheduler.client.report [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.093 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.118 227364 INFO nova.scheduler.client.report [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 076bf9f6-6607-4b08-b733-864854aad069#033[00m
Nov 29 03:18:49 np0005539551 nova_compute[227360]: 2025-11-29 08:18:49.187 227364 DEBUG oslo_concurrency.lockutils [None req-705d726f-0f00-4a08-80bc-647480804d76 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "076bf9f6-6607-4b08-b733-864854aad069" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:50.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:50 np0005539551 nova_compute[227360]: 2025-11-29 08:18:50.874 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539551 nova_compute[227360]: 2025-11-29 08:18:51.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:51 np0005539551 podman[268123]: 2025-11-29 08:18:51.619547021 +0000 UTC m=+0.051418928 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 03:18:51 np0005539551 podman[268122]: 2025-11-29 08:18:51.644232923 +0000 UTC m=+0.073976365 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:18:51 np0005539551 podman[268121]: 2025-11-29 08:18:51.666695076 +0000 UTC m=+0.099588711 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:52 np0005539551 nova_compute[227360]: 2025-11-29 08:18:52.495 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404317.4941356, 68c19565-6fe1-4c2c-927d-87f801074e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:52 np0005539551 nova_compute[227360]: 2025-11-29 08:18:52.496 227364 INFO nova.compute.manager [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:52 np0005539551 nova_compute[227360]: 2025-11-29 08:18:52.522 227364 DEBUG nova.compute.manager [None req-0e936fbf-4a2e-42f0-b498-d2aaf2309233 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:52.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:53 np0005539551 nova_compute[227360]: 2025-11-29 08:18:53.643 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:53 np0005539551 nova_compute[227360]: 2025-11-29 08:18:53.811 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:54.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:55 np0005539551 nova_compute[227360]: 2025-11-29 08:18:55.922 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:56 np0005539551 nova_compute[227360]: 2025-11-29 08:18:56.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:56.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:18:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:00.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:00 np0005539551 nova_compute[227360]: 2025-11-29 08:19:00.924 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:01 np0005539551 nova_compute[227360]: 2025-11-29 08:19:01.169 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404326.1686683, 076bf9f6-6607-4b08-b733-864854aad069 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:01 np0005539551 nova_compute[227360]: 2025-11-29 08:19:01.170 227364 INFO nova.compute.manager [-] [instance: 076bf9f6-6607-4b08-b733-864854aad069] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:19:01 np0005539551 nova_compute[227360]: 2025-11-29 08:19:01.189 227364 DEBUG nova.compute.manager [None req-f79cebb5-3820-48f3-b18d-211a19277839 - - - - - -] [instance: 076bf9f6-6607-4b08-b733-864854aad069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:01 np0005539551 nova_compute[227360]: 2025-11-29 08:19:01.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:01.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:02.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:04.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:05 np0005539551 nova_compute[227360]: 2025-11-29 08:19:05.926 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:06 np0005539551 nova_compute[227360]: 2025-11-29 08:19:06.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:06.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:08.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:09.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:10.566 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:10 np0005539551 nova_compute[227360]: 2025-11-29 08:19:10.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:10.566 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:19:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:10.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:10 np0005539551 nova_compute[227360]: 2025-11-29 08:19:10.927 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:11 np0005539551 nova_compute[227360]: 2025-11-29 08:19:11.203 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:11.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:14.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:15.568 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:15 np0005539551 nova_compute[227360]: 2025-11-29 08:19:15.929 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:16 np0005539551 nova_compute[227360]: 2025-11-29 08:19:16.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:19.869 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:19.869 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:19.870 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:20.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:20 np0005539551 nova_compute[227360]: 2025-11-29 08:19:20.932 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539551 nova_compute[227360]: 2025-11-29 08:19:21.207 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.232 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.232 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.251 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.341 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.341 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.347 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.348 227364 INFO nova.compute.claims [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.480 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:22 np0005539551 podman[268190]: 2025-11-29 08:19:22.599180026 +0000 UTC m=+0.047787232 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:19:22 np0005539551 podman[268189]: 2025-11-29 08:19:22.60313949 +0000 UTC m=+0.055133646 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:22 np0005539551 podman[268188]: 2025-11-29 08:19:22.627626087 +0000 UTC m=+0.084452460 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:19:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:22.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3420818875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.897 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.903 227364 DEBUG nova.compute.provider_tree [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.917 227364 DEBUG nova.scheduler.client.report [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.936 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.936 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.982 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:19:22 np0005539551 nova_compute[227360]: 2025-11-29 08:19:22.983 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.006 227364 INFO nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.025 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.074 227364 INFO nova.virt.block_device [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Booting with volume 85297db0-ac85-44d1-bc74-6e4a332ee974 at /dev/vda#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.213 227364 DEBUG os_brick.utils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.214 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.227 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.227 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[dce61f87-ae07-4f81-9ec3-a0a29ce7b851]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.228 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.236 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.237 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[e85803d2-cb23-47b6-8b96-5aa672911265]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.238 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.245 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.245 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[e94968c6-9e18-4b17-a84f-f348844cc13b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.246 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e9e4da-1dec-4624-a8a7-38005457f999]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.247 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.277 227364 DEBUG nova.policy [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '697e5f10e07b4256a3dc2ad3906db9d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '696ec278f2ec426fa75ebb50bdf1c16a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.280 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.282 227364 DEBUG os_brick.initiator.connectors.lightos [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.282 227364 DEBUG os_brick.initiator.connectors.lightos [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.282 227364 DEBUG os_brick.initiator.connectors.lightos [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.283 227364 DEBUG os_brick.utils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:19:23 np0005539551 nova_compute[227360]: 2025-11-29 08:19:23.283 227364 DEBUG nova.virt.block_device [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating existing volume attachment record: e603fded-cf62-41e6-b73b-0eeec5f57a7e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:19:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.570 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.570 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.602 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.645 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.646 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.646 227364 INFO nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating image(s)#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.647 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.647 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Ensure instance console log exists: /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.647 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.648 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.648 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.691 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.692 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.722 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.723 227364 INFO nova.compute.claims [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:19:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.770 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Successfully created port: 3b646f81-c090-4ebc-ab66-1ed42838ee7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:24 np0005539551 nova_compute[227360]: 2025-11-29 08:19:24.946 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3490139499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.403 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.409 227364 DEBUG nova.compute.provider_tree [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.429 227364 DEBUG nova.scheduler.client.report [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.459 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.460 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.537 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.537 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:19:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:25.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.574 227364 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.628 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:19:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.742 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.743 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.744 227364 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Creating image(s)#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.776 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.806 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.844 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.848 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.886 227364 DEBUG nova.policy [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d293467f8e498eaa87b6b8976b34d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27fd30263a7f4717b84946720a5770b5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.944 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.945 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.946 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.946 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.976 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:25 np0005539551 nova_compute[227360]: 2025-11-29 08:19:25.981 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.234 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.359 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.434 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] resizing rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.547 227364 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'migration_context' on Instance uuid ffb5c16c-adce-4345-8d54-4f48e1f1e57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.565 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.566 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Ensure instance console log exists: /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.566 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.567 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.567 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.574 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Successfully created port: 50b605a5-4c40-4112-8125-4e918829b690 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.894 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Successfully updated port: 3b646f81-c090-4ebc-ab66-1ed42838ee7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.912 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.913 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquired lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:26 np0005539551 nova_compute[227360]: 2025-11-29 08:19:26.913 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.009 227364 DEBUG nova.compute.manager [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-changed-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.009 227364 DEBUG nova.compute.manager [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Refreshing instance network info cache due to event network-changed-3b646f81-c090-4ebc-ab66-1ed42838ee7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.010 227364 DEBUG oslo_concurrency.lockutils [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.225 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.654 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Successfully updated port: 50b605a5-4c40-4112-8125-4e918829b690 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.682 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.683 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquired lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.683 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.802 227364 DEBUG nova.compute.manager [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-changed-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.802 227364 DEBUG nova.compute.manager [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Refreshing instance network info cache due to event network-changed-50b605a5-4c40-4112-8125-4e918829b690. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.802 227364 DEBUG oslo_concurrency.lockutils [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:27 np0005539551 nova_compute[227360]: 2025-11-29 08:19:27.870 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.425 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.713 227364 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Updating instance_info_cache with network_info: [{"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.729 227364 DEBUG nova.network.neutron [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating instance_info_cache with network_info: [{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.746 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Releasing lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.747 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Instance network_info: |[{"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.748 227364 DEBUG oslo_concurrency.lockutils [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.749 227364 DEBUG nova.network.neutron [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Refreshing network info cache for port 50b605a5-4c40-4112-8125-4e918829b690 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.755 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Start _get_guest_xml network_info=[{"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.759 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Releasing lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.760 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance network_info: |[{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.761 227364 DEBUG oslo_concurrency.lockutils [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.762 227364 DEBUG nova.network.neutron [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Refreshing network info cache for port 3b646f81-c090-4ebc-ab66-1ed42838ee7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.768 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start _get_guest_xml network_info=[{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-85297db0-ac85-44d1-bc74-6e4a332ee974', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '85297db0-ac85-44d1-bc74-6e4a332ee974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'attached_at': '', 'detached_at': '', 'volume_id': '85297db0-ac85-44d1-bc74-6e4a332ee974', 'serial': '85297db0-ac85-44d1-bc74-6e4a332ee974'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'e603fded-cf62-41e6-b73b-0eeec5f57a7e', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.775 227364 WARNING nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.780 227364 WARNING nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.783 227364 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.784 227364 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.785 227364 DEBUG nova.virt.libvirt.host [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.785 227364 DEBUG nova.virt.libvirt.host [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.790 227364 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.790 227364 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.791 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.791 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.792 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.792 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.792 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.792 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.793 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.793 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.793 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.793 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.793 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.794 227364 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.796 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.819 227364 DEBUG nova.virt.libvirt.host [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.820 227364 DEBUG nova.virt.libvirt.host [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.821 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.821 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.822 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.822 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.822 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.822 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.822 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.823 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.823 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.823 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.823 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.824 227364 DEBUG nova.virt.hardware [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.853 227364 DEBUG nova.storage.rbd_utils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:28 np0005539551 nova_compute[227360]: 2025-11-29 08:19:28.858 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2619141903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.213 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.253 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.257 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1991819357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.295 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.342 227364 DEBUG nova.virt.libvirt.vif [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1770757417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_projec
t_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.343 227364 DEBUG nova.network.os_vif_util [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.345 227364 DEBUG nova.network.os_vif_util [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.347 227364 DEBUG nova.objects.instance [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'pci_devices' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.370 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <uuid>a14ca685-ed5c-4583-90e2-565fbf5e1ef0</uuid>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <name>instance-00000070</name>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1770757417</nova:name>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:19:28</nova:creationTime>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:user uuid="697e5f10e07b4256a3dc2ad3906db9d2">tempest-ServerActionsV293TestJSON-2028033432-project-member</nova:user>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:project uuid="696ec278f2ec426fa75ebb50bdf1c16a">tempest-ServerActionsV293TestJSON-2028033432</nova:project>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:port uuid="3b646f81-c090-4ebc-ab66-1ed42838ee7b">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="serial">a14ca685-ed5c-4583-90e2-565fbf5e1ef0</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="uuid">a14ca685-ed5c-4583-90e2-565fbf5e1ef0</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-85297db0-ac85-44d1-bc74-6e4a332ee974">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <serial>85297db0-ac85-44d1-bc74-6e4a332ee974</serial>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:03:d3:13"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="tap3b646f81-c0"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/console.log" append="off"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:19:29 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:19:29 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.373 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Preparing to wait for external event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.374 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.374 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.375 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.376 227364 DEBUG nova.virt.libvirt.vif [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1770757417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',ow
ner_project_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.377 227364 DEBUG nova.network.os_vif_util [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.378 227364 DEBUG nova.network.os_vif_util [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.379 227364 DEBUG os_vif [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.381 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.382 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.388 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.389 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b646f81-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.389 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b646f81-c0, col_values=(('external_ids', {'iface-id': '3b646f81-c090-4ebc-ab66-1ed42838ee7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:d3:13', 'vm-uuid': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.391 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 NetworkManager[48922]: <info>  [1764404369.3929] manager: (tap3b646f81-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.394 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.400 227364 INFO os_vif [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0')#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.572 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.573 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.573 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No VIF found with MAC fa:16:3e:03:d3:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.574 227364 INFO nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Using config drive#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.602 227364 DEBUG nova.storage.rbd_utils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1538734210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.689 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.690 227364 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-3',id=115,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=ffb5c16c-adce-4345-8d54-4f48e1f1e57b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.690 227364 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.691 227364 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.692 227364 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffb5c16c-adce-4345-8d54-4f48e1f1e57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.730 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <uuid>ffb5c16c-adce-4345-8d54-4f48e1f1e57b</uuid>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <name>instance-00000073</name>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1670228511-3</nova:name>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:19:28</nova:creationTime>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:user uuid="14d293467f8e498eaa87b6b8976b34d9">tempest-ListServersNegativeTestJSON-1508942438-project-member</nova:user>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:project uuid="27fd30263a7f4717b84946720a5770b5">tempest-ListServersNegativeTestJSON-1508942438</nova:project>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <nova:port uuid="50b605a5-4c40-4112-8125-4e918829b690">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="serial">ffb5c16c-adce-4345-8d54-4f48e1f1e57b</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="uuid">ffb5c16c-adce-4345-8d54-4f48e1f1e57b</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:86:15:8f"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <target dev="tap50b605a5-4c"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/console.log" append="off"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:19:29 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:19:29 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:19:29 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:19:29 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.732 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Preparing to wait for external event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.733 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.733 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.733 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.734 227364 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-3',id=115,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=ffb5c16c-adce-4345-8d54-4f48e1f1e57b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.735 227364 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.736 227364 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.736 227364 DEBUG os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.737 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.738 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.738 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.743 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50b605a5-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.744 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50b605a5-4c, col_values=(('external_ids', {'iface-id': '50b605a5-4c40-4112-8125-4e918829b690', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:15:8f', 'vm-uuid': 'ffb5c16c-adce-4345-8d54-4f48e1f1e57b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 NetworkManager[48922]: <info>  [1764404369.7471] manager: (tap50b605a5-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.750 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.757 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.759 227364 INFO os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c')#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.830 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.831 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.832 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No VIF found with MAC fa:16:3e:86:15:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.832 227364 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Using config drive#033[00m
Nov 29 03:19:29 np0005539551 nova_compute[227360]: 2025-11-29 08:19:29.870 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.088 227364 INFO nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating config drive at /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.100 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bkx9qeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.157 227364 DEBUG nova.network.neutron [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updated VIF entry in instance network info cache for port 3b646f81-c090-4ebc-ab66-1ed42838ee7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.159 227364 DEBUG nova.network.neutron [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating instance_info_cache with network_info: [{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.180 227364 DEBUG oslo_concurrency.lockutils [req-8f5b6bd3-0a65-4de3-86ea-16b3dc0f7d19 req-e2585bf8-d0f7-4d77-a894-e2b4d2c1bcd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.242 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bkx9qeg" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.288 227364 DEBUG nova.storage.rbd_utils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.294 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.340 227364 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Creating config drive at /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.350 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0rp3ju8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.452 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.453 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.507 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0rp3ju8" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.553 227364 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.558 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.602 227364 DEBUG oslo_concurrency.processutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.604 227364 INFO nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deleting local config drive /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.6745] manager: (tap3b646f81-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 03:19:30 np0005539551 kernel: tap3b646f81-c0: entered promiscuous mode
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00425|binding|INFO|Claiming lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b for this chassis.
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00426|binding|INFO|3b646f81-c090-4ebc-ab66-1ed42838ee7b: Claiming fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.679 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.693 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d3:13 10.100.0.4'], port_security=['fa:16:3e:03:d3:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6512f12e-7c22-4533-b1e6-41428016593a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '696ec278f2ec426fa75ebb50bdf1c16a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79696f95-603f-4ed2-8054-c440dd658a0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f17a52e-c2c7-41e9-af0a-0831981ed76c, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3b646f81-c090-4ebc-ab66-1ed42838ee7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.695 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3b646f81-c090-4ebc-ab66-1ed42838ee7b in datapath 6512f12e-7c22-4533-b1e6-41428016593a bound to our chassis#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.697 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6512f12e-7c22-4533-b1e6-41428016593a#033[00m
Nov 29 03:19:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:30 np0005539551 systemd-machined[190756]: New machine qemu-48-instance-00000070.
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.712 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[03e23eb9-3229-426c-acb0-c7e5f60ecb10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.713 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6512f12e-71 in ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.715 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6512f12e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.715 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[84066fc7-3521-49db-bb42-2943f3151322]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 systemd-udevd[268727]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.716 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[65fa43a4-f788-42b0-bfa0-55e29743a5c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 systemd[1]: Started Virtual Machine qemu-48-instance-00000070.
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.727 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[61f68cb9-77e7-4b39-b30c-3408fd30b2e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.7342] device (tap3b646f81-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.7356] device (tap3b646f81-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.756 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0e3c94-5981-4589-9f58-69844b0bbf6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.770 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00427|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b ovn-installed in OVS
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00428|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b up in Southbound
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.776 227364 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config ffb5c16c-adce-4345-8d54-4f48e1f1e57b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.777 227364 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Deleting local config drive /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.790 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[08704187-e08e-4d89-9bf1-460f3fe0b8f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 systemd-udevd[268732]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.7988] manager: (tap6512f12e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.797 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[061e2e73-d364-46a4-a595-5893672880f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.829 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[639b0621-df82-46a6-84cd-ae9e612305d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.833 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fb0300-31fc-4059-b7a3-6064f4a59d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 systemd-udevd[268789]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:30 np0005539551 kernel: tap50b605a5-4c: entered promiscuous mode
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.8369] manager: (tap50b605a5-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.838 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00429|binding|INFO|Claiming lport 50b605a5-4c40-4112-8125-4e918829b690 for this chassis.
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00430|binding|INFO|50b605a5-4c40-4112-8125-4e918829b690: Claiming fa:16:3e:86:15:8f 10.100.0.5
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.8500] device (tap50b605a5-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.8520] device (tap50b605a5-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.8570] device (tap6512f12e-70): carrier: link connected
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.861 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bf86aae7-f560-4f56-aed9-4aa37015c6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.864 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:15:8f 10.100.0.5'], port_security=['fa:16:3e:86:15:8f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ffb5c16c-adce-4345-8d54-4f48e1f1e57b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=50b605a5-4c40-4112-8125-4e918829b690) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:30 np0005539551 systemd-machined[190756]: New machine qemu-49-instance-00000073.
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.878 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18291a69-fb17-4a45-a203-66ca9b05f8e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6512f12e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:74:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741773, 'reachable_time': 29243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268826, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.890 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[36a1b62c-3554-4d4a-948c-b34cdd15c632]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:7472'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741773, 'tstamp': 741773}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268827, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 systemd[1]: Started Virtual Machine qemu-49-instance-00000073.
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.906 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[46ba4893-f97a-42af-bd30-ffa97305908a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6512f12e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:74:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741773, 'reachable_time': 29243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268828, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.910 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00431|binding|INFO|Setting lport 50b605a5-4c40-4112-8125-4e918829b690 ovn-installed in OVS
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00432|binding|INFO|Setting lport 50b605a5-4c40-4112-8125-4e918829b690 up in Southbound
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.918 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.937 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3532f7ba-4a54-42c3-afdd-1f6d53091cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.988 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bdfec5-b7f5-4797-aedc-4ddf92b6426c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.989 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6512f12e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.989 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.990 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6512f12e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.991 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 NetworkManager[48922]: <info>  [1764404370.9921] manager: (tap6512f12e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 03:19:30 np0005539551 kernel: tap6512f12e-70: entered promiscuous mode
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:30.995 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6512f12e-70, col_values=(('external_ids', {'iface-id': '73177abb-43fa-48b2-bff7-3e77c9984956'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:30 np0005539551 nova_compute[227360]: 2025-11-29 08:19:30.996 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:30Z|00433|binding|INFO|Releasing lport 73177abb-43fa-48b2-bff7-3e77c9984956 from this chassis (sb_readonly=0)
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.009 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.010 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.011 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d712d4-f2db-41e7-bc37-aee86cf19497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.012 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-6512f12e-7c22-4533-b1e6-41428016593a
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 6512f12e-7c22-4533-b1e6-41428016593a
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.012 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'env', 'PROCESS_TAG=haproxy-6512f12e-7c22-4533-b1e6-41428016593a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6512f12e-7c22-4533-b1e6-41428016593a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:31 np0005539551 podman[268911]: 2025-11-29 08:19:31.378750716 +0000 UTC m=+0.061884174 container create 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:31 np0005539551 systemd[1]: Started libpod-conmon-69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4.scope.
Nov 29 03:19:31 np0005539551 podman[268911]: 2025-11-29 08:19:31.342718025 +0000 UTC m=+0.025851503 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.442 227364 DEBUG nova.compute.manager [req-d52849f1-4c4d-4f2f-9cac-1f5513c372c0 req-00c4cece-8b48-47c1-a08d-565bc3aff000 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.443 227364 DEBUG oslo_concurrency.lockutils [req-d52849f1-4c4d-4f2f-9cac-1f5513c372c0 req-00c4cece-8b48-47c1-a08d-565bc3aff000 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.443 227364 DEBUG oslo_concurrency.lockutils [req-d52849f1-4c4d-4f2f-9cac-1f5513c372c0 req-00c4cece-8b48-47c1-a08d-565bc3aff000 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.444 227364 DEBUG oslo_concurrency.lockutils [req-d52849f1-4c4d-4f2f-9cac-1f5513c372c0 req-00c4cece-8b48-47c1-a08d-565bc3aff000 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.444 227364 DEBUG nova.compute.manager [req-d52849f1-4c4d-4f2f-9cac-1f5513c372c0 req-00c4cece-8b48-47c1-a08d-565bc3aff000 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Processing event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:31 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:19:31 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1304b4c881201ffba487c2000b4ce5c2d728394d676a9c99e02105ea040117e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:31 np0005539551 podman[268911]: 2025-11-29 08:19:31.471074014 +0000 UTC m=+0.154207492 container init 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:19:31 np0005539551 podman[268911]: 2025-11-29 08:19:31.480709388 +0000 UTC m=+0.163842846 container start 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:19:31 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [NOTICE]   (268940) : New worker (268942) forked
Nov 29 03:19:31 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [NOTICE]   (268940) : Loading success.
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.557 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 50b605a5-4c40-4112-8125-4e918829b690 in datapath 26782821-34df-4010-9d17-f8854e221b4e unbound from our chassis#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.559 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26782821-34df-4010-9d17-f8854e221b4e#033[00m
Nov 29 03:19:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:31.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.573 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a58d4b-2a20-4903-801b-6ab42cb1f068]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.574 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26782821-31 in ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.576 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26782821-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.576 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9464e3-2308-4cba-8239-81932104aeb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.577 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9280a13c-584f-426d-a163-54eba49b6be1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.587 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a0add4-3f15-41f7-9d7a-2fc7ec98022a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.610 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[88f1a882-deeb-4708-8fa9-e1bf9e4a5da1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.639 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a733ec51-8001-4be2-ab2d-db55767445b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.644 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f888103-9142-4ad3-8c9c-2b2a608fcf04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 NetworkManager[48922]: <info>  [1764404371.6456] manager: (tap26782821-30): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.672 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e8730957-f71b-4ecb-bc4a-b787e0367413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.675 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[996d0aba-f431-42be-9c58-2824bfff54c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 NetworkManager[48922]: <info>  [1764404371.6951] device (tap26782821-30): carrier: link connected
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.701 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[148ed4e8-9c9c-4528-bc3d-7902bf1186b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.720 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c1732e-b1dd-4f31-a138-4a2172dacc4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741857, 'reachable_time': 28672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268961, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.735 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bd691b-78b8-48bf-8812-70983da31538]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:13dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741857, 'tstamp': 741857}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268962, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.749 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b615f869-10b7-4967-94c3-2e57dd55a3bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741857, 'reachable_time': 28672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268963, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.778 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2a6cd1-2441-4105-9e32-7e2c02818bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.840 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[178e7e29-2fe8-45a3-badf-8ef611d43851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.842 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.842 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.843 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26782821-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:31 np0005539551 NetworkManager[48922]: <info>  [1764404371.8459] manager: (tap26782821-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 03:19:31 np0005539551 kernel: tap26782821-30: entered promiscuous mode
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.852 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26782821-30, col_values=(('external_ids', {'iface-id': 'a221b966-8231-4ead-a0ed-2978f32e8746'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.853 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:31Z|00434|binding|INFO|Releasing lport a221b966-8231-4ead-a0ed-2978f32e8746 from this chassis (sb_readonly=0)
Nov 29 03:19:31 np0005539551 nova_compute[227360]: 2025-11-29 08:19:31.883 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.886 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.887 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[10012069-2168-46e2-bdbd-a8465c0bbeff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.888 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:19:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:31.889 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'env', 'PROCESS_TAG=haproxy-26782821-34df-4010-9d17-f8854e221b4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26782821-34df-4010-9d17-f8854e221b4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.217 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.2170115, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.218 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Started (Lifecycle Event)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.258 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.263 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.217235, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.263 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Paused (Lifecycle Event)
Nov 29 03:19:32 np0005539551 podman[269055]: 2025-11-29 08:19:32.282086245 +0000 UTC m=+0.059520672 container create 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.294 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.299 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:19:32 np0005539551 systemd[1]: Started libpod-conmon-2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22.scope.
Nov 29 03:19:32 np0005539551 podman[269055]: 2025-11-29 08:19:32.248595451 +0000 UTC m=+0.026029958 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:32 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:19:32 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c92f63129ec80266b6ba43804387ce29cb066a60c8732cb4a334d221f1d374b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.366 227364 DEBUG nova.network.neutron [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Updated VIF entry in instance network info cache for port 50b605a5-4c40-4112-8125-4e918829b690. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.366 227364 DEBUG nova.network.neutron [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Updating instance_info_cache with network_info: [{"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.370 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.373 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:32 np0005539551 podman[269055]: 2025-11-29 08:19:32.374150966 +0000 UTC m=+0.151585413 container init 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.376 227364 INFO nova.virt.libvirt.driver [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Instance spawned successfully.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.376 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:32 np0005539551 podman[269055]: 2025-11-29 08:19:32.379962089 +0000 UTC m=+0.157396516 container start 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:32 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [NOTICE]   (269099) : New worker (269101) forked
Nov 29 03:19:32 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [NOTICE]   (269099) : Loading success.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.446 227364 DEBUG nova.compute.manager [req-4d7a6a64-952a-45d9-ae90-6415d36a1098 req-44756126-02b6-4a55-8dcb-ee4b71d94d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.446 227364 DEBUG oslo_concurrency.lockutils [req-4d7a6a64-952a-45d9-ae90-6415d36a1098 req-44756126-02b6-4a55-8dcb-ee4b71d94d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.447 227364 DEBUG oslo_concurrency.lockutils [req-4d7a6a64-952a-45d9-ae90-6415d36a1098 req-44756126-02b6-4a55-8dcb-ee4b71d94d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.447 227364 DEBUG oslo_concurrency.lockutils [req-4d7a6a64-952a-45d9-ae90-6415d36a1098 req-44756126-02b6-4a55-8dcb-ee4b71d94d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.447 227364 DEBUG nova.compute.manager [req-4d7a6a64-952a-45d9-ae90-6415d36a1098 req-44756126-02b6-4a55-8dcb-ee4b71d94d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Processing event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.448 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.452 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.455 227364 INFO nova.virt.libvirt.driver [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance spawned successfully.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.455 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.466 227364 DEBUG oslo_concurrency.lockutils [req-c73bc495-2f4d-493e-b361-9002c683740f req-205aa5d9-22fa-444a-99b1-598fbfea2ad1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ffb5c16c-adce-4345-8d54-4f48e1f1e57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.469 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.469 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.3698196, ffb5c16c-adce-4345-8d54-4f48e1f1e57b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.470 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] VM Started (Lifecycle Event)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.474 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.475 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.476 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.476 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.477 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.478 227364 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.572 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.575 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.575 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.576 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.576 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.576 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.577 227364 DEBUG nova.virt.libvirt.driver [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.580 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.619 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.620 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.3699112, ffb5c16c-adce-4345-8d54-4f48e1f1e57b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.620 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] VM Paused (Lifecycle Event)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.631 227364 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Took 6.89 seconds to spawn the instance on the hypervisor.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.632 227364 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.641 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.645 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.372441, ffb5c16c-adce-4345-8d54-4f48e1f1e57b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.645 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] VM Resumed (Lifecycle Event)
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.670 227364 INFO nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Took 8.02 seconds to spawn the instance on the hypervisor.
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.670 227364 DEBUG nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.676 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.678 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.718 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.718 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404372.4515643, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.719 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.732 227364 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Took 8.06 seconds to build instance.#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.740 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.742 227364 INFO nova.compute.manager [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Took 10.43 seconds to build instance.#033[00m
Nov 29 03:19:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:32.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.746 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.782 227364 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:32 np0005539551 nova_compute[227360]: 2025-11-29 08:19:32.808 227364 DEBUG oslo_concurrency.lockutils [None req-97ba6168-c2cb-4531-be18-eb26983fd14d 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.429 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.429 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.430 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.430 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.431 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:33.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/814112315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:33 np0005539551 nova_compute[227360]: 2025-11-29 08:19:33.936 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.015 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.015 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.018 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.018 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.168 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.169 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4287MB free_disk=20.859298706054688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.169 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.170 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.251 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance a14ca685-ed5c-4583-90e2-565fbf5e1ef0 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.251 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance ffb5c16c-adce-4345-8d54-4f48e1f1e57b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.252 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.252 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.410 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.503 227364 DEBUG nova.compute.manager [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.504 227364 DEBUG oslo_concurrency.lockutils [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.505 227364 DEBUG oslo_concurrency.lockutils [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.505 227364 DEBUG oslo_concurrency.lockutils [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.506 227364 DEBUG nova.compute.manager [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] No waiting events found dispatching network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.506 227364 WARNING nova.compute.manager [req-daac263f-48f1-4659-8d97-1b07f86cc913 req-d546cc31-1401-49d1-bffa-269c832c1ef4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received unexpected event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.510 227364 DEBUG nova.compute.manager [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.510 227364 DEBUG oslo_concurrency.lockutils [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.511 227364 DEBUG oslo_concurrency.lockutils [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.511 227364 DEBUG oslo_concurrency.lockutils [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.511 227364 DEBUG nova.compute.manager [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.512 227364 WARNING nova.compute.manager [req-a820377c-5073-41ed-8b2e-b0ed69fa92bf req-f28c9f2f-c98a-4051-88c5-e05d08a06fa3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2170892126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.845 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.852 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.875 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.904 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:19:34 np0005539551 nova_compute[227360]: 2025-11-29 08:19:34.904 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:35.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:35 np0005539551 nova_compute[227360]: 2025-11-29 08:19:35.937 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:36 np0005539551 nova_compute[227360]: 2025-11-29 08:19:36.904 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:38 np0005539551 nova_compute[227360]: 2025-11-29 08:19:38.390 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539551 NetworkManager[48922]: <info>  [1764404378.3916] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 03:19:38 np0005539551 NetworkManager[48922]: <info>  [1764404378.3929] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 03:19:38 np0005539551 nova_compute[227360]: 2025-11-29 08:19:38.530 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:38Z|00435|binding|INFO|Releasing lport 73177abb-43fa-48b2-bff7-3e77c9984956 from this chassis (sb_readonly=0)
Nov 29 03:19:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:38Z|00436|binding|INFO|Releasing lport a221b966-8231-4ead-a0ed-2978f32e8746 from this chassis (sb_readonly=0)
Nov 29 03:19:38 np0005539551 nova_compute[227360]: 2025-11-29 08:19:38.547 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:39.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:39 np0005539551 nova_compute[227360]: 2025-11-29 08:19:39.748 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:40.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.939 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.964 227364 DEBUG nova.compute.manager [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-changed-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.964 227364 DEBUG nova.compute.manager [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Refreshing instance network info cache due to event network-changed-3b646f81-c090-4ebc-ab66-1ed42838ee7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.964 227364 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.964 227364 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:40 np0005539551 nova_compute[227360]: 2025-11-29 08:19:40.965 227364 DEBUG nova.network.neutron [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Refreshing network info cache for port 3b646f81-c090-4ebc-ab66-1ed42838ee7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:41.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.280 227364 DEBUG nova.network.neutron [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updated VIF entry in instance network info cache for port 3b646f81-c090-4ebc-ab66-1ed42838ee7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.280 227364 DEBUG nova.network.neutron [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating instance_info_cache with network_info: [{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.314 227364 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a14ca685-ed5c-4583-90e2-565fbf5e1ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.414 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.415 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.415 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:42 np0005539551 nova_compute[227360]: 2025-11-29 08:19:42.416 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:19:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:42.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:43 np0005539551 nova_compute[227360]: 2025-11-29 08:19:43.463 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:43 np0005539551 nova_compute[227360]: 2025-11-29 08:19:43.463 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:43 np0005539551 nova_compute[227360]: 2025-11-29 08:19:43.464 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:19:43 np0005539551 nova_compute[227360]: 2025-11-29 08:19:43.490 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:19:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:43.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:44 np0005539551 nova_compute[227360]: 2025-11-29 08:19:44.750 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:45Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:19:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:45Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:19:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:45 np0005539551 nova_compute[227360]: 2025-11-29 08:19:45.941 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:46Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:15:8f 10.100.0.5
Nov 29 03:19:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:46Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:15:8f 10.100.0.5
Nov 29 03:19:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.115 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.116 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.117 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.117 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.117 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.118 227364 INFO nova.compute.manager [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Terminating instance#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.119 227364 DEBUG nova.compute.manager [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:19:47 np0005539551 kernel: tap50b605a5-4c (unregistering): left promiscuous mode
Nov 29 03:19:47 np0005539551 NetworkManager[48922]: <info>  [1764404387.1725] device (tap50b605a5-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:47Z|00437|binding|INFO|Releasing lport 50b605a5-4c40-4112-8125-4e918829b690 from this chassis (sb_readonly=0)
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.184 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:47Z|00438|binding|INFO|Setting lport 50b605a5-4c40-4112-8125-4e918829b690 down in Southbound
Nov 29 03:19:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:47Z|00439|binding|INFO|Removing iface tap50b605a5-4c ovn-installed in OVS
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.187 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.190 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:15:8f 10.100.0.5'], port_security=['fa:16:3e:86:15:8f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ffb5c16c-adce-4345-8d54-4f48e1f1e57b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=50b605a5-4c40-4112-8125-4e918829b690) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.192 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 50b605a5-4c40-4112-8125-4e918829b690 in datapath 26782821-34df-4010-9d17-f8854e221b4e unbound from our chassis#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.193 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26782821-34df-4010-9d17-f8854e221b4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.195 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1834e0-cfd9-4cbc-9316-0ba53f83b5ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.196 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace which is not needed anymore#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.199 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539551 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000073.scope: Consumed 14.616s CPU time.
Nov 29 03:19:47 np0005539551 systemd-machined[190756]: Machine qemu-49-instance-00000073 terminated.
Nov 29 03:19:47 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [NOTICE]   (269099) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:47 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [NOTICE]   (269099) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:47 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [WARNING]  (269099) : Exiting Master process...
Nov 29 03:19:47 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [ALERT]    (269099) : Current worker (269101) exited with code 143 (Terminated)
Nov 29 03:19:47 np0005539551 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[269094]: [WARNING]  (269099) : All workers exited. Exiting... (0)
Nov 29 03:19:47 np0005539551 systemd[1]: libpod-2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539551 podman[269230]: 2025-11-29 08:19:47.338594943 +0000 UTC m=+0.048824230 container died 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.346 227364 INFO nova.virt.libvirt.driver [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Instance destroyed successfully.#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.347 227364 DEBUG nova.objects.instance [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'resources' on Instance uuid ffb5c16c-adce-4345-8d54-4f48e1f1e57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:47 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539551 systemd[1]: var-lib-containers-storage-overlay-7c92f63129ec80266b6ba43804387ce29cb066a60c8732cb4a334d221f1d374b-merged.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.372 227364 DEBUG nova.virt.libvirt.vif [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-3',id=115,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-29T08:19:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:32Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=ffb5c16c-adce-4345-8d54-4f48e1f1e57b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.373 227364 DEBUG nova.network.os_vif_util [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "50b605a5-4c40-4112-8125-4e918829b690", "address": "fa:16:3e:86:15:8f", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50b605a5-4c", "ovs_interfaceid": "50b605a5-4c40-4112-8125-4e918829b690", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.374 227364 DEBUG nova.network.os_vif_util [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.374 227364 DEBUG os_vif [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.376 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.377 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50b605a5-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.379 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 podman[269230]: 2025-11-29 08:19:47.385592724 +0000 UTC m=+0.095821981 container cleanup 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.386 227364 INFO os_vif [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:15:8f,bridge_name='br-int',has_traffic_filtering=True,id=50b605a5-4c40-4112-8125-4e918829b690,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50b605a5-4c')#033[00m
Nov 29 03:19:47 np0005539551 systemd[1]: libpod-conmon-2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539551 podman[269276]: 2025-11-29 08:19:47.442114396 +0000 UTC m=+0.036798692 container remove 2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.447 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f22006c8-7b70-4864-9725-7c535a45fbe0]: (4, ('Sat Nov 29 08:19:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22)\n2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22\nSat Nov 29 08:19:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22)\n2f0c9c283e1bc9444fd15e44bbba700cd48d648e58c3129dfd5140ef0993ee22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.448 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba44d447-141a-477c-8720-e256eda4dea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.449 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.450 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 kernel: tap26782821-30: left promiscuous mode
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.463 227364 DEBUG nova.compute.manager [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-unplugged-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.464 227364 DEBUG oslo_concurrency.lockutils [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.464 227364 DEBUG oslo_concurrency.lockutils [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.464 227364 DEBUG oslo_concurrency.lockutils [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.465 227364 DEBUG nova.compute.manager [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] No waiting events found dispatching network-vif-unplugged-50b605a5-4c40-4112-8125-4e918829b690 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.465 227364 DEBUG nova.compute.manager [req-daa1f92d-e132-428e-8897-f3fc7f07f10e req-a7746a72-a28b-4f93-8f69-cd37af668e1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-unplugged-50b605a5-4c40-4112-8125-4e918829b690 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.465 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.466 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7efa74-0086-45bf-b2cf-a722a280c6be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.482 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a581de84-3d15-4f0a-9d68-ccd28986539a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.483 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab10bfc2-6f36-4d19-85fa-8ad7981c282a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.498 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1d3f85-cf28-4c68-80ab-7e33682cb903]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741851, 'reachable_time': 24902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269302, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.501 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26782821-34df-4010-9d17-f8854e221b4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:47.501 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[36741700-655a-4a8b-b7c0-524fd250973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539551 systemd[1]: run-netns-ovnmeta\x2d26782821\x2d34df\x2d4010\x2d9d17\x2df8854e221b4e.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:47.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.754 227364 INFO nova.virt.libvirt.driver [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Deleting instance files /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b_del#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.758 227364 INFO nova.virt.libvirt.driver [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Deletion of /var/lib/nova/instances/ffb5c16c-adce-4345-8d54-4f48e1f1e57b_del complete#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.829 227364 INFO nova.compute.manager [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.830 227364 DEBUG oslo.service.loopingcall [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.830 227364 DEBUG nova.compute.manager [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:47 np0005539551 nova_compute[227360]: 2025-11-29 08:19:47.830 227364 DEBUG nova.network.neutron [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.705 227364 DEBUG nova.network.neutron [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.727 227364 INFO nova.compute.manager [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 29 03:19:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.770 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.771 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.801 227364 DEBUG nova.compute.manager [req-de6edfab-5f62-4207-a740-f9d5aabe02cf req-0252e811-473e-429c-9949-3208d1dc752e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-deleted-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:48 np0005539551 nova_compute[227360]: 2025-11-29 08:19:48.848 227364 DEBUG oslo_concurrency.processutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2687942240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.263 227364 DEBUG oslo_concurrency.processutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.270 227364 DEBUG nova.compute.provider_tree [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.287 227364 DEBUG nova.scheduler.client.report [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.324 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.362 227364 INFO nova.scheduler.client.report [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Deleted allocations for instance ffb5c16c-adce-4345-8d54-4f48e1f1e57b#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.456 227364 DEBUG oslo_concurrency.lockutils [None req-e1a87245-a4c5-41b4-88b1-44f83119cee0 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.551 227364 DEBUG nova.compute.manager [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.552 227364 DEBUG oslo_concurrency.lockutils [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.552 227364 DEBUG oslo_concurrency.lockutils [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.553 227364 DEBUG oslo_concurrency.lockutils [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ffb5c16c-adce-4345-8d54-4f48e1f1e57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.554 227364 DEBUG nova.compute.manager [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] No waiting events found dispatching network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:49 np0005539551 nova_compute[227360]: 2025-11-29 08:19:49.554 227364 WARNING nova.compute.manager [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Received unexpected event network-vif-plugged-50b605a5-4c40-4112-8125-4e918829b690 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:19:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:50 np0005539551 nova_compute[227360]: 2025-11-29 08:19:50.944 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:51.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:52 np0005539551 nova_compute[227360]: 2025-11-29 08:19:52.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:53.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:53 np0005539551 podman[269327]: 2025-11-29 08:19:53.629284384 +0000 UTC m=+0.075685980 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:19:53 np0005539551 podman[269328]: 2025-11-29 08:19:53.629287394 +0000 UTC m=+0.065502600 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:19:53 np0005539551 podman[269326]: 2025-11-29 08:19:53.656273487 +0000 UTC m=+0.101392028 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:19:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:53.895 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:53 np0005539551 nova_compute[227360]: 2025-11-29 08:19:53.896 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:53.898 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:19:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:55.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:55 np0005539551 nova_compute[227360]: 2025-11-29 08:19:55.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.210 227364 INFO nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Rebuilding instance#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.489 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'trusted_certs' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:56.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.810 227364 DEBUG nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.889 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'pci_requests' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.901 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'pci_devices' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.917 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'resources' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.928 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'migration_context' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.940 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:19:56 np0005539551 nova_compute[227360]: 2025-11-29 08:19:56.944 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:19:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:57Z|00440|binding|INFO|Releasing lport 73177abb-43fa-48b2-bff7-3e77c9984956 from this chassis (sb_readonly=0)
Nov 29 03:19:57 np0005539551 nova_compute[227360]: 2025-11-29 08:19:57.190 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539551 nova_compute[227360]: 2025-11-29 08:19:57.381 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:57.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:57.901 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.733 227364 DEBUG nova.compute.manager [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:19:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.845 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.845 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.893 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'pci_requests' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.919 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.919 227364 INFO nova.compute.claims [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.920 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'resources' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.932 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'numa_topology' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:58 np0005539551 nova_compute[227360]: 2025-11-29 08:19:58.945 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'pci_devices' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.000 227364 INFO nova.compute.resource_tracker [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating resource usage from migration 96a5768a-dd2b-42ce-a3a7-1085b8029fe4#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.000 227364 DEBUG nova.compute.resource_tracker [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Starting to track incoming migration 96a5768a-dd2b-42ce-a3a7-1085b8029fe4 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.077 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:59 np0005539551 kernel: tap3b646f81-c0 (unregistering): left promiscuous mode
Nov 29 03:19:59 np0005539551 NetworkManager[48922]: <info>  [1764404399.1874] device (tap3b646f81-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.193 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:59Z|00441|binding|INFO|Releasing lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b from this chassis (sb_readonly=0)
Nov 29 03:19:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:59Z|00442|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b down in Southbound
Nov 29 03:19:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:19:59Z|00443|binding|INFO|Removing iface tap3b646f81-c0 ovn-installed in OVS
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.204 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d3:13 10.100.0.4'], port_security=['fa:16:3e:03:d3:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6512f12e-7c22-4533-b1e6-41428016593a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '696ec278f2ec426fa75ebb50bdf1c16a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79696f95-603f-4ed2-8054-c440dd658a0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f17a52e-c2c7-41e9-af0a-0831981ed76c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3b646f81-c090-4ebc-ab66-1ed42838ee7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.205 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3b646f81-c090-4ebc-ab66-1ed42838ee7b in datapath 6512f12e-7c22-4533-b1e6-41428016593a unbound from our chassis#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.207 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6512f12e-7c22-4533-b1e6-41428016593a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.208 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23ee97a7-1eca-4d0e-ae62-1a7018a67090]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.210 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a namespace which is not needed anymore#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.222 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 29 03:19:59 np0005539551 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000070.scope: Consumed 15.239s CPU time.
Nov 29 03:19:59 np0005539551 systemd-machined[190756]: Machine qemu-48-instance-00000070 terminated.
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [NOTICE]   (268940) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [NOTICE]   (268940) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [WARNING]  (268940) : Exiting Master process...
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [WARNING]  (268940) : Exiting Master process...
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [ALERT]    (268940) : Current worker (268942) exited with code 143 (Terminated)
Nov 29 03:19:59 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[268936]: [WARNING]  (268940) : All workers exited. Exiting... (0)
Nov 29 03:19:59 np0005539551 systemd[1]: libpod-69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4.scope: Deactivated successfully.
Nov 29 03:19:59 np0005539551 podman[269435]: 2025-11-29 08:19:59.362017615 +0000 UTC m=+0.058282350 container died 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:19:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d1304b4c881201ffba487c2000b4ce5c2d728394d676a9c99e02105ea040117e-merged.mount: Deactivated successfully.
Nov 29 03:19:59 np0005539551 podman[269435]: 2025-11-29 08:19:59.412247491 +0000 UTC m=+0.108512246 container cleanup 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:59 np0005539551 systemd[1]: libpod-conmon-69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4.scope: Deactivated successfully.
Nov 29 03:19:59 np0005539551 podman[269472]: 2025-11-29 08:19:59.477335269 +0000 UTC m=+0.041377363 container remove 69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.481 227364 DEBUG nova.compute.manager [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.482 227364 DEBUG oslo_concurrency.lockutils [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.482 227364 DEBUG oslo_concurrency.lockutils [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.483 227364 DEBUG oslo_concurrency.lockutils [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.483 227364 DEBUG nova.compute.manager [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.483 227364 WARNING nova.compute.manager [req-b4f3725d-941f-45fb-9f0a-9758f7f1419d req-1f6da833-a902-4be6-8d0a-e9f0fbe6826b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.486 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5f00cfa2-9ef3-4dfe-a098-7e59865dd9d6]: (4, ('Sat Nov 29 08:19:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a (69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4)\n69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4\nSat Nov 29 08:19:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a (69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4)\n69f3ec8a154e7276998b520ff3e3b6bcfa19395467116a9539730725133174b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.488 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b06f06-6c4a-455c-b1d1-d332c4713738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.489 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6512f12e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.490 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 kernel: tap6512f12e-70: left promiscuous mode
Nov 29 03:19:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/651048761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.523 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cdb528-d902-4861-b1c7-df7c6ae115b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.537 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[875806b6-83ae-4444-9639-563dd2c9132a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.538 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[57897865-6e03-4ee1-ae19-f3c9dad4c9c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.548 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.556 227364 DEBUG nova.compute.provider_tree [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.557 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bab772-bba5-4186-8808-02ac057e1a78]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741766, 'reachable_time': 26649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269497, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.560 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:19:59.560 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ef49e3-2ae2-4bff-9823-9301e8111dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:59 np0005539551 systemd[1]: run-netns-ovnmeta\x2d6512f12e\x2d7c22\x2d4533\x2db1e6\x2d41428016593a.mount: Deactivated successfully.
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.576 227364 DEBUG nova.scheduler.client.report [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:19:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:59.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.624 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.624 227364 INFO nova.compute.manager [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Migrating#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.965 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.976 227364 INFO nova.virt.libvirt.driver [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance destroyed successfully.#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.984 227364 INFO nova.virt.libvirt.driver [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance destroyed successfully.#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.986 227364 DEBUG nova.virt.libvirt.vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1269321448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.987 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.988 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.989 227364 DEBUG os_vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.991 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.992 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b646f81-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539551 nova_compute[227360]: 2025-11-29 08:19:59.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:00 np0005539551 nova_compute[227360]: 2025-11-29 08:20:00.000 227364 INFO os_vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0')#033[00m
Nov 29 03:20:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:20:00 np0005539551 nova_compute[227360]: 2025-11-29 08:20:00.223 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deleting instance files /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_del#033[00m
Nov 29 03:20:00 np0005539551 nova_compute[227360]: 2025-11-29 08:20:00.224 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deletion of /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_del complete#033[00m
Nov 29 03:20:00 np0005539551 nova_compute[227360]: 2025-11-29 08:20:00.650 227364 WARNING nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance a14ca685-ed5c-4583-90e2-565fbf5e1ef0 could not be found.#033[00m
Nov 29 03:20:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.005 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.397 227364 DEBUG nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Preparing to wait for external event volume-reimaged-85297db0-ac85-44d1-bc74-6e4a332ee974 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.398 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.398 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.398 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.606 227364 DEBUG nova.compute.manager [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.606 227364 DEBUG oslo_concurrency.lockutils [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.606 227364 DEBUG oslo_concurrency.lockutils [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.606 227364 DEBUG oslo_concurrency.lockutils [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.607 227364 DEBUG nova.compute.manager [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No event matching network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b in dict_keys([('volume-reimaged', '85297db0-ac85-44d1-bc74-6e4a332ee974')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:20:01 np0005539551 nova_compute[227360]: 2025-11-29 08:20:01.607 227364 WARNING nova.compute.manager [req-465ae196-89c9-4c5d-9408-e75b02d693bd req-021e7ab6-6bdd-49f3-81aa-379baa272b1d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:20:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:01.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:02 np0005539551 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:20:02 np0005539551 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:20:02 np0005539551 systemd-logind[788]: New session 54 of user nova.
Nov 29 03:20:02 np0005539551 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:20:02 np0005539551 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:20:02 np0005539551 nova_compute[227360]: 2025-11-29 08:20:02.346 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404387.3455663, ffb5c16c-adce-4345-8d54-4f48e1f1e57b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:02 np0005539551 nova_compute[227360]: 2025-11-29 08:20:02.347 227364 INFO nova.compute.manager [-] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:20:02 np0005539551 nova_compute[227360]: 2025-11-29 08:20:02.365 227364 DEBUG nova.compute.manager [None req-c8e4abc4-a973-4a69-8f01-45fdf3b43936 - - - - - -] [instance: ffb5c16c-adce-4345-8d54-4f48e1f1e57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:02 np0005539551 systemd[269521]: Queued start job for default target Main User Target.
Nov 29 03:20:02 np0005539551 systemd[269521]: Created slice User Application Slice.
Nov 29 03:20:02 np0005539551 systemd[269521]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:20:02 np0005539551 systemd[269521]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:20:02 np0005539551 systemd[269521]: Reached target Paths.
Nov 29 03:20:02 np0005539551 systemd[269521]: Reached target Timers.
Nov 29 03:20:02 np0005539551 systemd[269521]: Starting D-Bus User Message Bus Socket...
Nov 29 03:20:02 np0005539551 systemd[269521]: Starting Create User's Volatile Files and Directories...
Nov 29 03:20:02 np0005539551 systemd[269521]: Finished Create User's Volatile Files and Directories.
Nov 29 03:20:02 np0005539551 systemd[269521]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:20:02 np0005539551 systemd[269521]: Reached target Sockets.
Nov 29 03:20:02 np0005539551 systemd[269521]: Reached target Basic System.
Nov 29 03:20:02 np0005539551 systemd[269521]: Reached target Main User Target.
Nov 29 03:20:02 np0005539551 systemd[269521]: Startup finished in 112ms.
Nov 29 03:20:02 np0005539551 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:20:02 np0005539551 systemd[1]: Started Session 54 of User nova.
Nov 29 03:20:02 np0005539551 systemd[1]: session-54.scope: Deactivated successfully.
Nov 29 03:20:02 np0005539551 systemd-logind[788]: Session 54 logged out. Waiting for processes to exit.
Nov 29 03:20:02 np0005539551 systemd-logind[788]: Removed session 54.
Nov 29 03:20:02 np0005539551 systemd-logind[788]: New session 56 of user nova.
Nov 29 03:20:02 np0005539551 systemd[1]: Started Session 56 of User nova.
Nov 29 03:20:02 np0005539551 systemd[1]: session-56.scope: Deactivated successfully.
Nov 29 03:20:02 np0005539551 systemd-logind[788]: Session 56 logged out. Waiting for processes to exit.
Nov 29 03:20:02 np0005539551 systemd-logind[788]: Removed session 56.
Nov 29 03:20:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:03.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:04 np0005539551 nova_compute[227360]: 2025-11-29 08:20:04.995 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.007 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.752 227364 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.753 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.754 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.754 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.755 227364 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.755 227364 WARNING nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.756 227364 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.756 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.757 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.758 227364 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.758 227364 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.759 227364 WARNING nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:20:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:06 np0005539551 nova_compute[227360]: 2025-11-29 08:20:06.903 227364 INFO nova.network.neutron [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating port 0bdc8d4b-e261-4398-8465-58392acd35a8 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:20:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:07.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.130 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.131 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.131 227364 DEBUG nova.network.neutron [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.146 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.669 227364 DEBUG nova.compute.manager [req-0a59ae81-8d69-4e07-8230-14eef35d13b0 req-1785187b-9969-40d2-95fe-25cf17aed515 0b5e852f4a214b5fad81b314f18939d1 ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event volume-reimaged-85297db0-ac85-44d1-bc74-6e4a332ee974 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.670 227364 DEBUG oslo_concurrency.lockutils [req-0a59ae81-8d69-4e07-8230-14eef35d13b0 req-1785187b-9969-40d2-95fe-25cf17aed515 0b5e852f4a214b5fad81b314f18939d1 ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.671 227364 DEBUG oslo_concurrency.lockutils [req-0a59ae81-8d69-4e07-8230-14eef35d13b0 req-1785187b-9969-40d2-95fe-25cf17aed515 0b5e852f4a214b5fad81b314f18939d1 ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.671 227364 DEBUG oslo_concurrency.lockutils [req-0a59ae81-8d69-4e07-8230-14eef35d13b0 req-1785187b-9969-40d2-95fe-25cf17aed515 0b5e852f4a214b5fad81b314f18939d1 ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.672 227364 DEBUG nova.compute.manager [req-0a59ae81-8d69-4e07-8230-14eef35d13b0 req-1785187b-9969-40d2-95fe-25cf17aed515 0b5e852f4a214b5fad81b314f18939d1 ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Processing event volume-reimaged-85297db0-ac85-44d1-bc74-6e4a332ee974 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.673 227364 DEBUG nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance event wait completed in 5 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.730 227364 INFO nova.virt.block_device [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Booting with volume 85297db0-ac85-44d1-bc74-6e4a332ee974 at /dev/vda#033[00m
Nov 29 03:20:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.889 227364 DEBUG nova.compute.manager [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.889 227364 DEBUG nova.compute.manager [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing instance network info cache due to event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.889 227364 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.979 227364 DEBUG os_brick.utils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.980 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.990 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.990 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[8aca30cb-8876-481e-8201-958c6f804ad5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.991 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.997 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.997 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c0d8be-7a8c-40e4-889a-adaf6cc95c46]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:08.999 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.008 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.008 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ebd285-0dfb-4abc-b76e-11cd7e4b8418]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.009 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[65c51990-2921-4ac0-82cd-0c10d11ed3d7]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.010 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.043 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.046 227364 DEBUG os_brick.initiator.connectors.lightos [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.046 227364 DEBUG os_brick.initiator.connectors.lightos [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.046 227364 DEBUG os_brick.initiator.connectors.lightos [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.047 227364 DEBUG os_brick.utils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.047 227364 DEBUG nova.virt.block_device [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating existing volume attachment record: 6bfc9ef9-9a14-4c36-895a-b788a0f55342 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:20:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:09.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:09 np0005539551 nova_compute[227360]: 2025-11-29 08:20:09.998 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.381 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.382 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating image(s)#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.383 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.383 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Ensure instance console log exists: /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.384 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.385 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.386 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.391 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start _get_guest_xml network_info=[{"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-85297db0-ac85-44d1-bc74-6e4a332ee974', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '85297db0-ac85-44d1-bc74-6e4a332ee974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'attached_at': '', 'detached_at': '', 'volume_id': '85297db0-ac85-44d1-bc74-6e4a332ee974', 'serial': '85297db0-ac85-44d1-bc74-6e4a332ee974'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '6bfc9ef9-9a14-4c36-895a-b788a0f55342', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.399 227364 WARNING nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.409 227364 DEBUG nova.virt.libvirt.host [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.411 227364 DEBUG nova.virt.libvirt.host [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.417 227364 DEBUG nova.virt.libvirt.host [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.417 227364 DEBUG nova.virt.libvirt.host [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.419 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.420 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.421 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.421 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.422 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.422 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.423 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.423 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.424 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.424 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.425 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.425 227364 DEBUG nova.virt.hardware [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.426 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'vcpu_model' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.434 227364 DEBUG nova.network.neutron [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.492 227364 DEBUG nova.storage.rbd_utils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.496 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.526 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.531 227364 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.531 227364 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.647 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.649 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.649 227364 INFO nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Creating image(s)#033[00m
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.695 227364 DEBUG nova.storage.rbd_utils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] creating snapshot(nova-resize) on rbd image(cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:20:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3194487606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:10 np0005539551 nova_compute[227360]: 2025-11-29 08:20:10.929 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.009 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.073 227364 DEBUG nova.virt.libvirt.vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1269321448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.074 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.075 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.079 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <uuid>a14ca685-ed5c-4583-90e2-565fbf5e1ef0</uuid>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <name>instance-00000070</name>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1269321448</nova:name>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:20:10</nova:creationTime>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:user uuid="697e5f10e07b4256a3dc2ad3906db9d2">tempest-ServerActionsV293TestJSON-2028033432-project-member</nova:user>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:project uuid="696ec278f2ec426fa75ebb50bdf1c16a">tempest-ServerActionsV293TestJSON-2028033432</nova:project>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <nova:port uuid="3b646f81-c090-4ebc-ab66-1ed42838ee7b">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="serial">a14ca685-ed5c-4583-90e2-565fbf5e1ef0</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="uuid">a14ca685-ed5c-4583-90e2-565fbf5e1ef0</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-85297db0-ac85-44d1-bc74-6e4a332ee974">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <serial>85297db0-ac85-44d1-bc74-6e4a332ee974</serial>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:03:d3:13"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <target dev="tap3b646f81-c0"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/console.log" append="off"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:20:11 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:20:11 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:20:11 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:20:11 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.082 227364 DEBUG nova.virt.libvirt.vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1269321448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.082 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.083 227364 DEBUG nova.network.os_vif_util [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.084 227364 DEBUG os_vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.085 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.086 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.090 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.091 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b646f81-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.091 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b646f81-c0, col_values=(('external_ids', {'iface-id': '3b646f81-c090-4ebc-ab66-1ed42838ee7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:d3:13', 'vm-uuid': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.094 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539551 NetworkManager[48922]: <info>  [1764404411.0952] manager: (tap3b646f81-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.096 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.106 227364 INFO os_vif [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0')#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.202 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.202 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.202 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] No VIF found with MAC fa:16:3e:03:d3:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.203 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Using config drive#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.232 227364 DEBUG nova.storage.rbd_utils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.260 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'ec2_ids' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.308 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'keypairs' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 03:20:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:11.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.651 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.798 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.798 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Ensure instance console log exists: /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.799 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.799 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.799 227364 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.801 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start _get_guest_xml network_info=[{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.805 227364 WARNING nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.809 227364 DEBUG nova.virt.libvirt.host [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.810 227364 DEBUG nova.virt.libvirt.host [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.813 227364 DEBUG nova.virt.libvirt.host [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.814 227364 DEBUG nova.virt.libvirt.host [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.814 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.815 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.815 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.815 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.815 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.815 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.virt.hardware [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.816 227364 DEBUG nova.objects.instance [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:11 np0005539551 nova_compute[227360]: 2025-11-29 08:20:11.836 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4144348051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.237 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Creating config drive at /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.243 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpovnc87lx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.269 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.309 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.377 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpovnc87lx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.413 227364 DEBUG nova.storage.rbd_utils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] rbd image a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.417 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.580 227364 DEBUG oslo_concurrency.processutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config a14ca685-ed5c-4583-90e2-565fbf5e1ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.581 227364 INFO nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deleting local config drive /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:20:12 np0005539551 kernel: tap3b646f81-c0: entered promiscuous mode
Nov 29 03:20:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:12Z|00444|binding|INFO|Claiming lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b for this chassis.
Nov 29 03:20:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:12Z|00445|binding|INFO|3b646f81-c090-4ebc-ab66-1ed42838ee7b: Claiming fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.669 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.6695] manager: (tap3b646f81-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 03:20:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:12Z|00446|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b ovn-installed in OVS
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.696 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.700 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 systemd-udevd[269795]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:12 np0005539551 systemd-machined[190756]: New machine qemu-50-instance-00000070.
Nov 29 03:20:12 np0005539551 systemd[1]: Started Virtual Machine qemu-50-instance-00000070.
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.7414] device (tap3b646f81-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.7462] device (tap3b646f81-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3756302290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:12 np0005539551 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:20:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:12Z|00447|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b up in Southbound
Nov 29 03:20:12 np0005539551 systemd[269521]: Activating special unit Exit the Session...
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped target Main User Target.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped target Basic System.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped target Paths.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped target Sockets.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped target Timers.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:20:12 np0005539551 systemd[269521]: Closed D-Bus User Message Bus Socket.
Nov 29 03:20:12 np0005539551 systemd[269521]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:20:12 np0005539551 systemd[269521]: Removed slice User Application Slice.
Nov 29 03:20:12 np0005539551 systemd[269521]: Reached target Shutdown.
Nov 29 03:20:12 np0005539551 systemd[269521]: Finished Exit the Session.
Nov 29 03:20:12 np0005539551 systemd[269521]: Reached target Exit the Session.
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.774 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d3:13 10.100.0.4'], port_security=['fa:16:3e:03:d3:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6512f12e-7c22-4533-b1e6-41428016593a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '696ec278f2ec426fa75ebb50bdf1c16a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '79696f95-603f-4ed2-8054-c440dd658a0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f17a52e-c2c7-41e9-af0a-0831981ed76c, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3b646f81-c090-4ebc-ab66-1ed42838ee7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.775 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3b646f81-c090-4ebc-ab66-1ed42838ee7b in datapath 6512f12e-7c22-4533-b1e6-41428016593a bound to our chassis#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.781 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6512f12e-7c22-4533-b1e6-41428016593a#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.786 227364 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.787 227364 DEBUG nova.virt.libvirt.vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:06Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.788 227364 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.789 227364 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:12 np0005539551 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:20:12 np0005539551 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.791 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <uuid>cf3d3db9-f753-47a8-93d5-7f0491bb03fd</uuid>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <name>instance-0000006f</name>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1568692522</nova:name>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:20:11</nova:creationTime>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <nova:port uuid="0bdc8d4b-e261-4398-8465-58392acd35a8">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="serial">cf3d3db9-f753-47a8-93d5-7f0491bb03fd</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="uuid">cf3d3db9-f753-47a8-93d5-7f0491bb03fd</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:5a:07:26"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <target dev="tap0bdc8d4b-e2"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/console.log" append="off"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:20:12 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:20:12 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:20:12 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:20:12 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.791 227364 DEBUG nova.virt.libvirt.vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:06Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.792 227364 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.792 227364 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.792 227364 DEBUG os_vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.793 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.793 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.793 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.796 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bdc8d4b-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.796 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0bdc8d4b-e2, col_values=(('external_ids', {'iface-id': '0bdc8d4b-e261-4398-8465-58392acd35a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:07:26', 'vm-uuid': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.8002] manager: (tap0bdc8d4b-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.801 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[68a97e5e-a28e-49f9-afce-cccd98f7a61d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.802 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6512f12e-71 in ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.802 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.804 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6512f12e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.805 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[54da9e0f-b90f-40de-b5fa-c5a5d8be1883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.805 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[73372137-430d-41a6-aa20-700f2a096a8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.811 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.812 227364 INFO os_vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2')#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.827 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[ada0418c-76df-4e94-bed3-8e6958a902ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:20:12 np0005539551 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:20:12 np0005539551 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:20:12 np0005539551 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.852 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[621925e0-a188-4618-b6c2-80a6cd7d5334]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.880 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[484d32f8-1647-4723-8af6-3fb1fcbcd3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.889 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cc041fcd-a24b-45d8-8d23-aec4eea29009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.8908] manager: (tap6512f12e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.928 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd96198-9a07-424c-abe5-8ffd8bbf2a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.931 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5d57d865-4464-4ea3-b9ba-5b03c91b1b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.935 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.936 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.936 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No VIF found with MAC fa:16:3e:5a:07:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:12 np0005539551 nova_compute[227360]: 2025-11-29 08:20:12.936 227364 INFO nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Using config drive#033[00m
Nov 29 03:20:12 np0005539551 NetworkManager[48922]: <info>  [1764404412.9526] device (tap6512f12e-70): carrier: link connected
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.958 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1b9de0-0ca3-4ed9-a5b4-bc38a6f623dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.973 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd62a7f-c45a-44ac-ba36-800399bffc1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6512f12e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:74:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745983, 'reachable_time': 41939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269853, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:12.995 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe7e217-cd9f-4158-91c7-a3725ece2f07]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:7472'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745983, 'tstamp': 745983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269856, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.009 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba413d0e-3c5c-47d0-bc3a-c8b18bfff8a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6512f12e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:74:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745983, 'reachable_time': 41939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269859, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 systemd-udevd[269829]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.0287] manager: (tap0bdc8d4b-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.045 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dae3ac6f-3199-436c-97f9-716a1a8c605c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 systemd-machined[190756]: New machine qemu-51-instance-0000006f.
Nov 29 03:20:13 np0005539551 kernel: tap0bdc8d4b-e2: entered promiscuous mode
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.0746] device (tap0bdc8d4b-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.0756] device (tap0bdc8d4b-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00448|binding|INFO|Claiming lport 0bdc8d4b-e261-4398-8465-58392acd35a8 for this chassis.
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00449|binding|INFO|0bdc8d4b-e261-4398-8465-58392acd35a8: Claiming fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.084 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:13 np0005539551 systemd[1]: Started Virtual Machine qemu-51-instance-0000006f.
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00450|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 ovn-installed in OVS
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00451|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 up in Southbound
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.118 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[be805768-61fb-4a07-9337-1db448fbd07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.119 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6512f12e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.119 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.120 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6512f12e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.121 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.1218] manager: (tap6512f12e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 03:20:13 np0005539551 kernel: tap6512f12e-70: entered promiscuous mode
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.129 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6512f12e-70, col_values=(('external_ids', {'iface-id': '73177abb-43fa-48b2-bff7-3e77c9984956'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.130 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00452|binding|INFO|Releasing lport 73177abb-43fa-48b2-bff7-3e77c9984956 from this chassis (sb_readonly=0)
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.131 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.134 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[60138fa9-7ce1-42de-b5fa-dcf28edd1764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.134 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-6512f12e-7c22-4533-b1e6-41428016593a
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/6512f12e-7c22-4533-b1e6-41428016593a.pid.haproxy
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 6512f12e-7c22-4533-b1e6-41428016593a
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.135 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'env', 'PROCESS_TAG=haproxy-6512f12e-7c22-4533-b1e6-41428016593a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6512f12e-7c22-4533-b1e6-41428016593a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.144 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.437 227364 DEBUG nova.compute.manager [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.437 227364 DEBUG oslo_concurrency.lockutils [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.437 227364 DEBUG oslo_concurrency.lockutils [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.437 227364 DEBUG oslo_concurrency.lockutils [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.438 227364 DEBUG nova.compute.manager [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.438 227364 WARNING nova.compute.manager [req-7df0d200-a2d6-4119-bb80-8f82fde44048 req-ddb1c85b-2507-4ad6-8535-977f8b5002a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:20:13 np0005539551 podman[269908]: 2025-11-29 08:20:13.473195356 +0000 UTC m=+0.051584888 container create a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:20:13 np0005539551 systemd[1]: Started libpod-conmon-a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da.scope.
Nov 29 03:20:13 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:20:13 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/525cc6eb49792cf89bf0d465e18f515522114adab79b8fb845482836cc51aae8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:13 np0005539551 podman[269908]: 2025-11-29 08:20:13.542169874 +0000 UTC m=+0.120559426 container init a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:20:13 np0005539551 podman[269908]: 2025-11-29 08:20:13.448043255 +0000 UTC m=+0.026432807 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:13 np0005539551 podman[269908]: 2025-11-29 08:20:13.549763269 +0000 UTC m=+0.128152801 container start a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:20:13 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [NOTICE]   (269966) : New worker (269975) forked
Nov 29 03:20:13 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [NOTICE]   (269966) : Loading success.
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.613 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 unbound from our chassis#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.616 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309#033[00m
Nov 29 03:20:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:13.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.629 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[241b8ade-08b2-4299-8752-5b161e5d45eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.632 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96a9f8d0-91 in ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.633 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96a9f8d0-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.634 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4468d09-69a7-4f2a-a7e2-1e5dcf3a592d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.634 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef171a2-a152-4ef2-8f5b-82e8679dfeb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.645 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404413.6433108, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.645 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.647 227364 DEBUG nova.compute.manager [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.649 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3931df-1cfc-49fb-8e10-fd48b22dec90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.652 227364 INFO nova.virt.libvirt.driver [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance running successfully.#033[00m
Nov 29 03:20:13 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.655 227364 DEBUG nova.virt.libvirt.guest [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.656 227364 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.667 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.670 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.674 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[50fdeafb-d5cb-4111-9598-03f7407a0304]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.702 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1768ce75-59f4-40d9-af89-1429aec496aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.7137] manager: (tap96a9f8d0-90): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.712 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e9591fb4-827a-4a36-a567-223c351bf5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.724 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.724 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404413.6434755, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.724 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.749 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[36cc4286-2bf8-401b-bf90-507e620e2af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.752 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[078e482a-d701-4b68-a3db-93cf643f7f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.759 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.763 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.7882] device (tap96a9f8d0-90): carrier: link connected
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.792 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d70f4920-e5e6-41a9-9713-9108ebb21069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.809 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f324a268-46c1-4434-be00-23fe29ea56bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96a9f8d0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3f:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746066, 'reachable_time': 42264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270030, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.814 227364 DEBUG nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.815 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.827 227364 INFO nova.virt.libvirt.driver [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance spawned successfully.#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.827 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.828 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75d65353-b6ab-49b7-ab67-460f5cd92aeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:3f3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746066, 'tstamp': 746066}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270031, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.841 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04fe5cca-0ef8-4144-964b-b70c6223a1ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96a9f8d0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3f:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746066, 'reachable_time': 42264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270032, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.871 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28aeae-251f-46f3-84af-19d552d84865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.917 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ce7f21-0408-44ad-bfea-42b3f88744a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.919 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96a9f8d0-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.919 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.919 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96a9f8d0-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.921 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 NetworkManager[48922]: <info>  [1764404413.9223] manager: (tap96a9f8d0-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 03:20:13 np0005539551 kernel: tap96a9f8d0-90: entered promiscuous mode
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.933 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96a9f8d0-90, col_values=(('external_ids', {'iface-id': '74c0e253-5186-4acd-84b9-fb779ff161ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:13Z|00453|binding|INFO|Releasing lport 74c0e253-5186-4acd-84b9-fb779ff161ee from this chassis (sb_readonly=0)
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.945 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for a14ca685-ed5c-4583-90e2-565fbf5e1ef0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.945 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404413.8068283, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.945 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.956 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.956 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.958 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.958 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.958 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.959 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.959 227364 DEBUG nova.virt.libvirt.driver [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.957 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[861b7ac3-02c9-4e70-aee7-042dabeae12a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.958 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:13.958 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'env', 'PROCESS_TAG=haproxy-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.964 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.971 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.985 227364 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated VIF entry in instance network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.986 227364 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.998 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.998 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404413.8080926, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:13 np0005539551 nova_compute[227360]: 2025-11-29 08:20:13.998 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.034 227364 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.035 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.037 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.041 227364 DEBUG nova.compute.manager [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.069 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.098 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.098 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.099 227364 DEBUG nova.objects.instance [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:20:14 np0005539551 nova_compute[227360]: 2025-11-29 08:20:14.163 227364 DEBUG oslo_concurrency.lockutils [None req-0a59ae81-8d69-4e07-8230-14eef35d13b0 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:14 np0005539551 podman[270062]: 2025-11-29 08:20:14.348032808 +0000 UTC m=+0.049803519 container create 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:20:14 np0005539551 systemd[1]: Started libpod-conmon-4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5.scope.
Nov 29 03:20:14 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:20:14 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2c684a8d3585327237fa5853625f9e0242001c02ec4f6dfb9486a340e1945/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:14 np0005539551 podman[270062]: 2025-11-29 08:20:14.325676943 +0000 UTC m=+0.027447664 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:14 np0005539551 podman[270062]: 2025-11-29 08:20:14.430241675 +0000 UTC m=+0.132012406 container init 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:20:14 np0005539551 podman[270062]: 2025-11-29 08:20:14.435129577 +0000 UTC m=+0.136900288 container start 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:20:14 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [NOTICE]   (270081) : New worker (270083) forked
Nov 29 03:20:14 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [NOTICE]   (270081) : Loading success.
Nov 29 03:20:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.546 227364 DEBUG nova.compute.manager [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.546 227364 DEBUG oslo_concurrency.lockutils [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.547 227364 DEBUG oslo_concurrency.lockutils [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.547 227364 DEBUG oslo_concurrency.lockutils [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.547 227364 DEBUG nova.compute.manager [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:15 np0005539551 nova_compute[227360]: 2025-11-29 08:20:15.547 227364 WARNING nova.compute.manager [req-8155625a-0c39-46e6-8dfb-3a1353cb9bd8 req-c97f7b9c-3258-4123-84d2-ba7b51f3349e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:16 np0005539551 nova_compute[227360]: 2025-11-29 08:20:16.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539551 nova_compute[227360]: 2025-11-29 08:20:17.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 03:20:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:18.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.331 227364 DEBUG nova.compute.manager [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.332 227364 DEBUG oslo_concurrency.lockutils [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.332 227364 DEBUG oslo_concurrency.lockutils [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.333 227364 DEBUG oslo_concurrency.lockutils [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.333 227364 DEBUG nova.compute.manager [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:19 np0005539551 nova_compute[227360]: 2025-11-29 08:20:19.334 227364 WARNING nova.compute.manager [req-82c718dc-8acf-4d03-9c68-6133f04bef48 req-e87b6ebb-5377-4e0e-b9ea-14785737494a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:19.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:19.870 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:19.871 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:19.871 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.048 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:21 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.439 227364 DEBUG nova.compute.manager [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.440 227364 DEBUG oslo_concurrency.lockutils [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.441 227364 DEBUG oslo_concurrency.lockutils [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.441 227364 DEBUG oslo_concurrency.lockutils [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.442 227364 DEBUG nova.compute.manager [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:21 np0005539551 nova_compute[227360]: 2025-11-29 08:20:21.443 227364 WARNING nova.compute.manager [req-6c1c4255-0d03-41d3-883a-ec006be89bd1 req-4c575996-6bea-4b74-b4b9-62fb07638271 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:21.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539551 nova_compute[227360]: 2025-11-29 08:20:22.801 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:23.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:24 np0005539551 podman[270095]: 2025-11-29 08:20:24.601538955 +0000 UTC m=+0.054292242 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:20:24 np0005539551 podman[270093]: 2025-11-29 08:20:24.625497974 +0000 UTC m=+0.082103755 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 03:20:24 np0005539551 podman[270094]: 2025-11-29 08:20:24.632202095 +0000 UTC m=+0.088600410 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:20:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:25.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:26 np0005539551 nova_compute[227360]: 2025-11-29 08:20:26.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:26Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:20:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:26.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 03:20:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:27 np0005539551 nova_compute[227360]: 2025-11-29 08:20:27.805 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:28Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:20:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:28Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:d3:13 10.100.0.4
Nov 29 03:20:28 np0005539551 nova_compute[227360]: 2025-11-29 08:20:28.436 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:30 np0005539551 nova_compute[227360]: 2025-11-29 08:20:30.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:30 np0005539551 nova_compute[227360]: 2025-11-29 08:20:30.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:30 np0005539551 nova_compute[227360]: 2025-11-29 08:20:30.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:20:30 np0005539551 nova_compute[227360]: 2025-11-29 08:20:30.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:20:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:30.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:31 np0005539551 nova_compute[227360]: 2025-11-29 08:20:31.054 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:31 np0005539551 nova_compute[227360]: 2025-11-29 08:20:31.238 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:31 np0005539551 nova_compute[227360]: 2025-11-29 08:20:31.238 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:31 np0005539551 nova_compute[227360]: 2025-11-29 08:20:31.239 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:20:31 np0005539551 nova_compute[227360]: 2025-11-29 08:20:31.239 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:32 np0005539551 nova_compute[227360]: 2025-11-29 08:20:32.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:32.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.119 227364 INFO nova.compute.manager [None req-7c518b37-58f0-4e79-a313-dab69f10fd20 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Get console output#033[00m
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.126 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:20:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.675 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.704 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.705 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.706 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:33 np0005539551 nova_compute[227360]: 2025-11-29 08:20:33.706 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.435 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1733522019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.905 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.980 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.980 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.984 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:35 np0005539551 nova_compute[227360]: 2025-11-29 08:20:35.984 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.022 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.022 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.022 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.023 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.023 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.024 227364 INFO nova.compute.manager [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Terminating instance#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.025 227364 DEBUG nova.compute.manager [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.057 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 kernel: tap0bdc8d4b-e2 (unregistering): left promiscuous mode
Nov 29 03:20:36 np0005539551 NetworkManager[48922]: <info>  [1764404436.0720] device (tap0bdc8d4b-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:36 np0005539551 virtqemud[226785]: An error occurred, but the cause is unknown
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00454|binding|INFO|Releasing lport 0bdc8d4b-e261-4398-8465-58392acd35a8 from this chassis (sb_readonly=0)
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00455|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 down in Southbound
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00456|binding|INFO|Removing iface tap0bdc8d4b-e2 ovn-installed in OVS
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.097 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.099 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 unbound from our chassis#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.100 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.101 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b153b439-e4ae-4203-895c-201db4f56f08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.102 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 namespace which is not needed anymore#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 29 03:20:36 np0005539551 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Consumed 13.509s CPU time.
Nov 29 03:20:36 np0005539551 systemd-machined[190756]: Machine qemu-51-instance-0000006f terminated.
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.187 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.189 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4095MB free_disk=20.860031127929688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.189 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.189 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:36 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [NOTICE]   (270081) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:36 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [NOTICE]   (270081) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:36 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [WARNING]  (270081) : Exiting Master process...
Nov 29 03:20:36 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [ALERT]    (270081) : Current worker (270083) exited with code 143 (Terminated)
Nov 29 03:20:36 np0005539551 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[270077]: [WARNING]  (270081) : All workers exited. Exiting... (0)
Nov 29 03:20:36 np0005539551 systemd[1]: libpod-4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5.scope: Deactivated successfully.
Nov 29 03:20:36 np0005539551 podman[270203]: 2025-11-29 08:20:36.23123728 +0000 UTC m=+0.043964542 container died 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:20:36 np0005539551 kernel: tap0bdc8d4b-e2: entered promiscuous mode
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00457|binding|INFO|Claiming lport 0bdc8d4b-e261-4398-8465-58392acd35a8 for this chassis.
Nov 29 03:20:36 np0005539551 NetworkManager[48922]: <info>  [1764404436.2465] manager: (tap0bdc8d4b-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00458|binding|INFO|0bdc8d4b-e261-4398-8465-58392acd35a8: Claiming fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:20:36 np0005539551 systemd-udevd[270184]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:36 np0005539551 kernel: tap0bdc8d4b-e2 (unregistering): left promiscuous mode
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.255 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay-71c2c684a8d3585327237fa5853625f9e0242001c02ec4f6dfb9486a340e1945-merged.mount: Deactivated successfully.
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.273 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance a14ca685-ed5c-4583-90e2-565fbf5e1ef0 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.273 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance cf3d3db9-f753-47a8-93d5-7f0491bb03fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.273 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.274 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00459|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 ovn-installed in OVS
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00460|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 up in Southbound
Nov 29 03:20:36 np0005539551 podman[270203]: 2025-11-29 08:20:36.281985605 +0000 UTC m=+0.094712877 container cleanup 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00461|binding|INFO|Releasing lport 0bdc8d4b-e261-4398-8465-58392acd35a8 from this chassis (sb_readonly=1)
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.282 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00462|if_status|INFO|Dropped 46 log messages in last 587 seconds (most recently, 587 seconds ago) due to excessive rate
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00463|if_status|INFO|Not setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 down as sb is readonly
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.283 227364 INFO nova.virt.libvirt.driver [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance destroyed successfully.#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.283 227364 DEBUG nova.objects.instance [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00464|binding|INFO|Releasing lport 0bdc8d4b-e261-4398-8465-58392acd35a8 from this chassis (sb_readonly=0)
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00465|binding|INFO|Removing iface tap0bdc8d4b-e2 ovn-installed in OVS
Nov 29 03:20:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:36Z|00466|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 down in Southbound
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 systemd[1]: libpod-conmon-4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5.scope: Deactivated successfully.
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.296 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.300 227364 DEBUG nova.virt.libvirt.vif [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:19Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.301 227364 DEBUG nova.network.os_vif_util [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.302 227364 DEBUG nova.network.os_vif_util [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.302 227364 DEBUG os_vif [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.305 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bdc8d4b-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.306 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.307 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.310 227364 INFO os_vif [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2')#033[00m
Nov 29 03:20:36 np0005539551 podman[270233]: 2025-11-29 08:20:36.34826188 +0000 UTC m=+0.040108718 container remove 4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.355 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e3703c36-395a-4cde-b38d-8e0c30cf0b27]: (4, ('Sat Nov 29 08:20:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 (4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5)\n4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5\nSat Nov 29 08:20:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 (4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5)\n4ef76d41b1a02e44fc282a14e70b0a4aa4d0a5cc377b1ed04cb51969c26bfec5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.357 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eccb3afb-d465-4eed-8c2e-67f5545913f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.357 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96a9f8d0-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.359 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 kernel: tap96a9f8d0-90: left promiscuous mode
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.363 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.379 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9293c7-1c7e-42b2-82ba-6877e043998c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.391 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.397 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a51b0f-6fe0-4b89-a862-e4d038ac7d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.398 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da9884-48f1-4e08-b0c2-bbe2d2f07f5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.413 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4734dd34-0b97-4e30-bda0-3252412eb7c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746057, 'reachable_time': 27440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270267, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.416 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.416 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4a4488-f675-4842-81ef-4188077f260b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.417 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 unbound from our chassis#033[00m
Nov 29 03:20:36 np0005539551 systemd[1]: run-netns-ovnmeta\x2d96a9f8d0\x2d94cb\x2d4ef1\x2db5fc\x2d814aeb66b309.mount: Deactivated successfully.
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.418 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.418 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f36924d7-eb85-4703-9312-d3055539662a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.419 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 unbound from our chassis#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.420 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:36.420 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6881eb-e79c-47e6-997d-b9f4a93d27f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.614 227364 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.615 227364 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing instance network info cache due to event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.616 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.616 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.616 227364 DEBUG nova.network.neutron [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.721 227364 INFO nova.virt.libvirt.driver [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Deleting instance files /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_del#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.722 227364 INFO nova.virt.libvirt.driver [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Deletion of /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_del complete#033[00m
Nov 29 03:20:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164492713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.779 227364 INFO nova.compute.manager [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.780 227364 DEBUG oslo.service.loopingcall [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.780 227364 DEBUG nova.compute.manager [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.780 227364 DEBUG nova.network.neutron [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.788 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.793 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.808 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.851 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:20:36 np0005539551 nova_compute[227360]: 2025-11-29 08:20:36.852 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.404 227364 DEBUG nova.network.neutron [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.417 227364 INFO nova.compute.manager [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.457 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.457 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.506 227364 DEBUG nova.compute.manager [req-ae3a5382-32db-475c-9ffd-3a9a25bddc02 req-09f2df93-f570-4001-b2d9-1b32d0195d5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-deleted-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.513 227364 DEBUG oslo_concurrency.processutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.852 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4070946661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.933 227364 DEBUG oslo_concurrency.processutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.940 227364 DEBUG nova.compute.provider_tree [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.961 227364 DEBUG nova.scheduler.client.report [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:37 np0005539551 nova_compute[227360]: 2025-11-29 08:20:37.989 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.022 227364 INFO nova.scheduler.client.report [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance cf3d3db9-f753-47a8-93d5-7f0491bb03fd#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.119 227364 DEBUG oslo_concurrency.lockutils [None req-143a9bbc-3fe2-496b-9903-56f4c626f5a4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.449 227364 DEBUG nova.network.neutron [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated VIF entry in instance network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.449 227364 DEBUG nova.network.neutron [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.515 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.515 227364 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.516 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.516 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.516 227364 DEBUG oslo_concurrency.lockutils [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.516 227364 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.516 227364 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.682 227364 DEBUG nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.683 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.683 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.683 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.684 227364 DEBUG nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.684 227364 WARNING nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.684 227364 DEBUG nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.684 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.684 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.685 227364 DEBUG oslo_concurrency.lockutils [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.685 227364 DEBUG nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:38 np0005539551 nova_compute[227360]: 2025-11-29 08:20:38.685 227364 WARNING nova.compute.manager [req-f39ab6b1-e1c3-4cf4-b569-39d455e44f82 req-7d82d360-2971-47e5-80a5-14a51d49d120 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:20:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:39.897 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:39.898 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:20:39 np0005539551 nova_compute[227360]: 2025-11-29 08:20:39.898 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:41 np0005539551 nova_compute[227360]: 2025-11-29 08:20:41.059 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:41 np0005539551 nova_compute[227360]: 2025-11-29 08:20:41.307 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:41Z|00467|binding|INFO|Releasing lport 73177abb-43fa-48b2-bff7-3e77c9984956 from this chassis (sb_readonly=0)
Nov 29 03:20:41 np0005539551 nova_compute[227360]: 2025-11-29 08:20:41.988 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:42.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:43 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 29 03:20:43 np0005539551 nova_compute[227360]: 2025-11-29 08:20:43.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:43.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:44 np0005539551 nova_compute[227360]: 2025-11-29 08:20:44.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:44 np0005539551 nova_compute[227360]: 2025-11-29 08:20:44.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:20:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:45.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:46 np0005539551 nova_compute[227360]: 2025-11-29 08:20:46.063 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539551 nova_compute[227360]: 2025-11-29 08:20:46.308 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539551 nova_compute[227360]: 2025-11-29 08:20:46.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:46.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:47.900 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.917 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.918 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.918 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.919 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.919 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.921 227364 INFO nova.compute.manager [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Terminating instance#033[00m
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.923 227364 DEBUG nova.compute.manager [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:20:47 np0005539551 kernel: tap3b646f81-c0 (unregistering): left promiscuous mode
Nov 29 03:20:47 np0005539551 NetworkManager[48922]: <info>  [1764404447.9836] device (tap3b646f81-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.992 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:47Z|00468|binding|INFO|Releasing lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b from this chassis (sb_readonly=0)
Nov 29 03:20:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:47Z|00469|binding|INFO|Setting lport 3b646f81-c090-4ebc-ab66-1ed42838ee7b down in Southbound
Nov 29 03:20:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:20:47Z|00470|binding|INFO|Removing iface tap3b646f81-c0 ovn-installed in OVS
Nov 29 03:20:47 np0005539551 nova_compute[227360]: 2025-11-29 08:20:47.995 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.002 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:d3:13 10.100.0.4'], port_security=['fa:16:3e:03:d3:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a14ca685-ed5c-4583-90e2-565fbf5e1ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6512f12e-7c22-4533-b1e6-41428016593a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '696ec278f2ec426fa75ebb50bdf1c16a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '79696f95-603f-4ed2-8054-c440dd658a0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f17a52e-c2c7-41e9-af0a-0831981ed76c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=3b646f81-c090-4ebc-ab66-1ed42838ee7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.004 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 3b646f81-c090-4ebc-ab66-1ed42838ee7b in datapath 6512f12e-7c22-4533-b1e6-41428016593a unbound from our chassis#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.007 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6512f12e-7c22-4533-b1e6-41428016593a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.009 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cac61043-2c44-4c43-b68b-f70935458ccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.011 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a namespace which is not needed anymore#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.012 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 29 03:20:48 np0005539551 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000070.scope: Consumed 15.311s CPU time.
Nov 29 03:20:48 np0005539551 systemd-machined[190756]: Machine qemu-50-instance-00000070 terminated.
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.166 227364 INFO nova.virt.libvirt.driver [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Instance destroyed successfully.#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.167 227364 DEBUG nova.objects.instance [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lazy-loading 'resources' on Instance uuid a14ca685-ed5c-4583-90e2-565fbf5e1ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:48 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [NOTICE]   (269966) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:48 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [NOTICE]   (269966) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:48 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [WARNING]  (269966) : Exiting Master process...
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.184 227364 DEBUG nova.virt.libvirt.vif [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1269321448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1770757417',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNQpCV8LVYV3XU0jRo9AB80D14fcNsVBchxQv/kqVR98kae3OkX+NM6hECihiGK1BlzV9Y53yNGfFNWAKz66qpbev4cpjjjNCpxIV7kZ2X9xTxjFVoODdsMt72oZagL4iA==',key_name='tempest-keypair-1370277031',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='696ec278f2ec426fa75ebb50bdf1c16a',ramdisk_id='',reservation_id='r-gh8c0s9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='
virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-2028033432',owner_user_name='tempest-ServerActionsV293TestJSON-2028033432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='697e5f10e07b4256a3dc2ad3906db9d2',uuid=a14ca685-ed5c-4583-90e2-565fbf5e1ef0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.185 227364 DEBUG nova.network.os_vif_util [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converting VIF {"id": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "address": "fa:16:3e:03:d3:13", "network": {"id": "6512f12e-7c22-4533-b1e6-41428016593a", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1331773440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "696ec278f2ec426fa75ebb50bdf1c16a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b646f81-c0", "ovs_interfaceid": "3b646f81-c090-4ebc-ab66-1ed42838ee7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:48 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [ALERT]    (269966) : Current worker (269975) exited with code 143 (Terminated)
Nov 29 03:20:48 np0005539551 neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a[269954]: [WARNING]  (269966) : All workers exited. Exiting... (0)
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.186 227364 DEBUG nova.network.os_vif_util [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.187 227364 DEBUG os_vif [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:48 np0005539551 systemd[1]: libpod-a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da.scope: Deactivated successfully.
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.188 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.189 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b646f81-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.192 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:48 np0005539551 podman[270518]: 2025-11-29 08:20:48.195825195 +0000 UTC m=+0.098815787 container died a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.196 227364 INFO os_vif [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:d3:13,bridge_name='br-int',has_traffic_filtering=True,id=3b646f81-c090-4ebc-ab66-1ed42838ee7b,network=Network(6512f12e-7c22-4533-b1e6-41428016593a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b646f81-c0')#033[00m
Nov 29 03:20:48 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:48 np0005539551 systemd[1]: var-lib-containers-storage-overlay-525cc6eb49792cf89bf0d465e18f515522114adab79b8fb845482836cc51aae8-merged.mount: Deactivated successfully.
Nov 29 03:20:48 np0005539551 podman[270518]: 2025-11-29 08:20:48.23110856 +0000 UTC m=+0.134099162 container cleanup a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:20:48 np0005539551 systemd[1]: libpod-conmon-a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da.scope: Deactivated successfully.
Nov 29 03:20:48 np0005539551 podman[270576]: 2025-11-29 08:20:48.299116353 +0000 UTC m=+0.040931220 container remove a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.304 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c85ecf5-2b94-4802-8fb2-10454d4b10c8]: (4, ('Sat Nov 29 08:20:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a (a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da)\na2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da\nSat Nov 29 08:20:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a (a2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da)\na2f399654a638f519a4d000dbe8c818d65e7924d5c2a07932d35a030597439da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.306 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9130d135-cbe3-4aa4-bc2b-3eb4cedfe2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.307 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6512f12e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:48 np0005539551 kernel: tap6512f12e-70: left promiscuous mode
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.357 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.375 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bf827c65-b935-4958-aafc-903e5e38ec68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.395 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba51981e-2784-4f94-b9d7-0aedc53f68f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.396 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ee29badf-f10e-4259-9f2e-0d7d7180cd2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.410 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[03994c9f-db94-44a5-b2b0-5858e40b6b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745975, 'reachable_time': 21083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270591, 'error': None, 'target': 'ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 systemd[1]: run-netns-ovnmeta\x2d6512f12e\x2d7c22\x2d4533\x2db1e6\x2d41428016593a.mount: Deactivated successfully.
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.413 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6512f12e-7c22-4533-b1e6-41428016593a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:48 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:20:48.413 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[cf348a54-aa31-418d-8b9b-01f4959cb529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.428 227364 INFO nova.virt.libvirt.driver [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deleting instance files /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_del#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.429 227364 INFO nova.virt.libvirt.driver [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deletion of /var/lib/nova/instances/a14ca685-ed5c-4583-90e2-565fbf5e1ef0_del complete#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.526 227364 INFO nova.compute.manager [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.527 227364 DEBUG oslo.service.loopingcall [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.527 227364 DEBUG nova.compute.manager [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:20:48 np0005539551 nova_compute[227360]: 2025-11-29 08:20:48.527 227364 DEBUG nova.network.neutron [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:20:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:48.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.188 227364 DEBUG nova.compute.manager [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.188 227364 DEBUG oslo_concurrency.lockutils [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.189 227364 DEBUG oslo_concurrency.lockutils [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.189 227364 DEBUG oslo_concurrency.lockutils [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.189 227364 DEBUG nova.compute.manager [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.189 227364 DEBUG nova.compute.manager [req-2aebab9a-973f-448c-a9c9-9dfe7e7411e3 req-a0628edc-7ba8-497b-9291-5417a8506e14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-unplugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:20:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.653 227364 DEBUG nova.network.neutron [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.680 227364 INFO nova.compute.manager [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.785 227364 DEBUG nova.compute.manager [req-c0519a7b-4b7a-4e99-8003-023a4257a5eb req-6925c946-a8a1-462e-8aa6-ce1e7a09848a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-deleted-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.983 227364 INFO nova.compute.manager [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Took 0.30 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:20:49 np0005539551 nova_compute[227360]: 2025-11-29 08:20:49.986 227364 DEBUG nova.compute.manager [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Deleting volume: 85297db0-ac85-44d1-bc74-6e4a332ee974 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.264 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.265 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.295 227364 DEBUG nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.317 227364 DEBUG nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.318 227364 DEBUG nova.compute.provider_tree [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.381 227364 DEBUG nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.406 227364 DEBUG nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.446 227364 DEBUG oslo_concurrency.processutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/182089397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.870 227364 DEBUG oslo_concurrency.processutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.876 227364 DEBUG nova.compute.provider_tree [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.905 227364 DEBUG nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.929 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:50 np0005539551 nova_compute[227360]: 2025-11-29 08:20:50.963 227364 INFO nova.scheduler.client.report [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Deleted allocations for instance a14ca685-ed5c-4583-90e2-565fbf5e1ef0
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.041 227364 DEBUG oslo_concurrency.lockutils [None req-9795036b-7ef0-4c6c-8262-3d7bef3e1b73 697e5f10e07b4256a3dc2ad3906db9d2 696ec278f2ec426fa75ebb50bdf1c16a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.065 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.278 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404436.2768135, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.278 227364 INFO nova.compute.manager [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Stopped (Lifecycle Event)
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.298 227364 DEBUG nova.compute.manager [None req-642929aa-ad31-408f-8acd-a09cbff250fc - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.477 227364 DEBUG nova.compute.manager [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.477 227364 DEBUG oslo_concurrency.lockutils [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.478 227364 DEBUG oslo_concurrency.lockutils [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.478 227364 DEBUG oslo_concurrency.lockutils [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a14ca685-ed5c-4583-90e2-565fbf5e1ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.478 227364 DEBUG nova.compute.manager [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] No waiting events found dispatching network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:51 np0005539551 nova_compute[227360]: 2025-11-29 08:20:51.478 227364 WARNING nova.compute.manager [req-f3b75e20-873c-4f72-aaa3-816e6b0eacc0 req-66653ef7-e18a-4a07-adb9-896077f7589c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Received unexpected event network-vif-plugged-3b646f81-c090-4ebc-ab66-1ed42838ee7b for instance with vm_state deleted and task_state None.
Nov 29 03:20:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:51.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:53 np0005539551 nova_compute[227360]: 2025-11-29 08:20:53.193 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:55 np0005539551 podman[270617]: 2025-11-29 08:20:55.656069055 +0000 UTC m=+0.093062381 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:20:55 np0005539551 podman[270616]: 2025-11-29 08:20:55.679436737 +0000 UTC m=+0.126562778 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:20:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:55 np0005539551 podman[270618]: 2025-11-29 08:20:55.689465549 +0000 UTC m=+0.120705160 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:20:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:56 np0005539551 nova_compute[227360]: 2025-11-29 08:20:56.067 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.337 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.337 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.384 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.631 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.631 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.645 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.645 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.652 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.652 227364 INFO nova.compute.claims [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:20:58 np0005539551 nova_compute[227360]: 2025-11-29 08:20:58.655 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:20:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:59 np0005539551 nova_compute[227360]: 2025-11-29 08:20:59.244 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:59 np0005539551 nova_compute[227360]: 2025-11-29 08:20:59.286 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:20:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3084246635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:59 np0005539551 nova_compute[227360]: 2025-11-29 08:20:59.760 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:59 np0005539551 nova_compute[227360]: 2025-11-29 08:20:59.768 227364 DEBUG nova.compute.provider_tree [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:20:59 np0005539551 nova_compute[227360]: 2025-11-29 08:20:59.836 227364 DEBUG nova.scheduler.client.report [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.103 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.104 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.106 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.114 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.115 227364 INFO nova.compute.claims [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:21:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.515 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.515 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.598 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.635 227364 INFO nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.850 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:21:00 np0005539551 nova_compute[227360]: 2025-11-29 08:21:00.861 227364 DEBUG nova.policy [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1552f15deb524705a9456cbe9b54c429', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bace34c102e4d56b089fd695d324f10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:21:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.352 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.354 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.355 227364 INFO nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating image(s)
Nov 29 03:21:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.673 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.708 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.740 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.746 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.853 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.855 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.857 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.858 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.923 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.942 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 83af56cb-4634-464b-a921-b228b72f2ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3908439525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:01 np0005539551 nova_compute[227360]: 2025-11-29 08:21:01.992 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.002 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.012 227364 DEBUG nova.compute.provider_tree [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.163 227364 DEBUG nova.scheduler.client.report [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.364 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.366 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.548 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.548 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.764 227364 DEBUG nova.policy [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:21:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:02.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:02 np0005539551 nova_compute[227360]: 2025-11-29 08:21:02.985 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 83af56cb-4634-464b-a921-b228b72f2ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.029 227364 INFO nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.078 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.163 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404448.162581, a14ca685-ed5c-4583-90e2-565fbf5e1ef0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.163 227364 INFO nova.compute.manager [-] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] VM Stopped (Lifecycle Event)
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.210 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.229 227364 DEBUG nova.compute.manager [None req-aeb939f4-4063-45d3-8c95-6e598f80a58d - - - - - -] [instance: a14ca685-ed5c-4583-90e2-565fbf5e1ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.234 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.332 227364 DEBUG nova.objects.instance [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.404 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Successfully created port: cfb780c5-3830-452f-89f0-cd6f52cd9e67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.419 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.420 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Ensure instance console log exists: /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.421 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.421 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.421 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.641 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.643 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.644 227364 INFO nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Creating image(s)
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.684 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.718 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.752 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.756 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.856 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.857 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.858 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.859 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.893 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:03 np0005539551 nova_compute[227360]: 2025-11-29 08:21:03.898 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:04 np0005539551 nova_compute[227360]: 2025-11-29 08:21:04.254 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Successfully created port: f435ee76-ed2f-4ad8-a9e1-bda955080b3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:21:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.169 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.274 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.322 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Successfully updated port: cfb780c5-3830-452f-89f0-cd6f52cd9e67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.364 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.364 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.364 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.485 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Successfully updated port: f435ee76-ed2f-4ad8-a9e1-bda955080b3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.488 227364 DEBUG nova.compute.manager [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-changed-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.488 227364 DEBUG nova.compute.manager [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Refreshing instance network info cache due to event network-changed-cfb780c5-3830-452f-89f0-cd6f52cd9e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.489 227364 DEBUG oslo_concurrency.lockutils [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.494 227364 DEBUG nova.objects.instance [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.606 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:21:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.968 227364 DEBUG nova.compute.manager [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.968 227364 DEBUG nova.compute.manager [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.969 227364 DEBUG oslo_concurrency.lockutils [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.969 227364 DEBUG oslo_concurrency.lockutils [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.969 227364 DEBUG nova.network.neutron [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.976 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.976 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Ensure instance console log exists: /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.976 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.976 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:05 np0005539551 nova_compute[227360]: 2025-11-29 08:21:05.977 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.070 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.238 227364 DEBUG nova.network.neutron [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:21:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.661 227364 DEBUG nova.network.neutron [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.701 227364 DEBUG oslo_concurrency.lockutils [req-2ca3de10-8923-4227-b68e-825fffe434f2 req-5ea5ef32-445c-43fc-9db1-088d57f462ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.701 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.702 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.805 227364 DEBUG nova.network.neutron [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Updating instance_info_cache with network_info: [{"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:06.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.944 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.944 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance network_info: |[{"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.944 227364 DEBUG oslo_concurrency.lockutils [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.945 227364 DEBUG nova.network.neutron [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Refreshing network info cache for port cfb780c5-3830-452f-89f0-cd6f52cd9e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.948 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start _get_guest_xml network_info=[{"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.953 227364 WARNING nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.958 227364 DEBUG nova.virt.libvirt.host [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.958 227364 DEBUG nova.virt.libvirt.host [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.961 227364 DEBUG nova.virt.libvirt.host [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.961 227364 DEBUG nova.virt.libvirt.host [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.962 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.963 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.963 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.964 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.964 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.964 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.965 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.965 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.965 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.965 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.966 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.966 227364 DEBUG nova.virt.hardware [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:06 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.969 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:06.999 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3089702294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.414 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.440 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.445 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4126795036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.856 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.859 227364 DEBUG nova.virt.libvirt.vif [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerA
ctionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:00Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.860 227364 DEBUG nova.network.os_vif_util [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.862 227364 DEBUG nova.network.os_vif_util [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.864 227364 DEBUG nova.objects.instance [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.905 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <uuid>83af56cb-4634-464b-a921-b228b72f2ea5</uuid>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <name>instance-00000076</name>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:name>tempest-tempest.common.compute-instance-1605427141</nova:name>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:21:06</nova:creationTime>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <nova:port uuid="cfb780c5-3830-452f-89f0-cd6f52cd9e67">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="serial">83af56cb-4634-464b-a921-b228b72f2ea5</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="uuid">83af56cb-4634-464b-a921-b228b72f2ea5</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/83af56cb-4634-464b-a921-b228b72f2ea5_disk">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/83af56cb-4634-464b-a921-b228b72f2ea5_disk.config">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:d7:e6:6b"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <target dev="tapcfb780c5-38"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/console.log" append="off"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:21:07 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:21:07 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:21:07 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:21:07 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.907 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Preparing to wait for external event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.907 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.907 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.908 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.908 227364 DEBUG nova.virt.libvirt.vif [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:00Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.909 227364 DEBUG nova.network.os_vif_util [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.909 227364 DEBUG nova.network.os_vif_util [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.910 227364 DEBUG os_vif [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.910 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.911 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.911 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.916 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.917 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb780c5-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.918 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcfb780c5-38, col_values=(('external_ids', {'iface-id': 'cfb780c5-3830-452f-89f0-cd6f52cd9e67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:e6:6b', 'vm-uuid': '83af56cb-4634-464b-a921-b228b72f2ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:07 np0005539551 NetworkManager[48922]: <info>  [1764404467.9214] manager: (tapcfb780c5-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.921 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.925 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.927 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.928 227364 INFO os_vif [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38')#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.983 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.983 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.984 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:d7:e6:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:07 np0005539551 nova_compute[227360]: 2025-11-29 08:21:07.984 227364 INFO nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Using config drive#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.014 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.167 227364 DEBUG nova.network.neutron [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.190 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.190 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance network_info: |[{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.192 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start _get_guest_xml network_info=[{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.197 227364 WARNING nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.201 227364 DEBUG nova.virt.libvirt.host [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.201 227364 DEBUG nova.virt.libvirt.host [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.205 227364 DEBUG nova.virt.libvirt.host [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.205 227364 DEBUG nova.virt.libvirt.host [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.207 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.208 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.208 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.208 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.209 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.209 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.209 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.209 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.210 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.210 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.210 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.210 227364 DEBUG nova.virt.hardware [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.213 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.520 227364 DEBUG nova.network.neutron [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Updated VIF entry in instance network info cache for port cfb780c5-3830-452f-89f0-cd6f52cd9e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.521 227364 DEBUG nova.network.neutron [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Updating instance_info_cache with network_info: [{"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.539 227364 DEBUG oslo_concurrency.lockutils [req-03677878-4e6b-4a26-ae66-cbdbaf53d22b req-0f29d678-5d03-49eb-9d2e-45a23a3ec2a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-83af56cb-4634-464b-a921-b228b72f2ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.633 227364 INFO nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating config drive at /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.638 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkqq1h2a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2185328806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.668 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.695 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.698 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.770 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkqq1h2a" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.797 227364 DEBUG nova.storage.rbd_utils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.800 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.948 227364 DEBUG oslo_concurrency.processutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:08 np0005539551 nova_compute[227360]: 2025-11-29 08:21:08.949 227364 INFO nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deleting local config drive /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.0039] manager: (tapcfb780c5-38): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 03:21:09 np0005539551 kernel: tapcfb780c5-38: entered promiscuous mode
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:09Z|00471|binding|INFO|Claiming lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 for this chassis.
Nov 29 03:21:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:09Z|00472|binding|INFO|cfb780c5-3830-452f-89f0-cd6f52cd9e67: Claiming fa:16:3e:d7:e6:6b 10.100.0.13
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.016 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.0207] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.0217] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.020 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.029 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.031 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.034 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:21:09 np0005539551 systemd-udevd[271250]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.048 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[95aafb2a-bf19-4c68-bb46-f2b97291ff87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.049 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc1dfc3-81 in ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.0532] device (tapcfb780c5-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.0562] device (tapcfb780c5-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:09 np0005539551 systemd-machined[190756]: New machine qemu-52-instance-00000076.
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.057 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc1dfc3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.057 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ca18d8-9d93-4193-ba5e-e044853bfdc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.061 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f636107-9ce4-4498-a456-59ec684cddd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.072 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bb8cfb-ba3d-4061-bb84-f7f068e4bfb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 systemd[1]: Started Virtual Machine qemu-52-instance-00000076.
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.096 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2961f4-01e6-4d2f-aaa4-e52670428487]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.122 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6ae3d2-1bae-477c-a21a-f9ca42ce2bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1873562322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.130 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[419f2d44-a41e-41f8-bc33-0aa3c896e849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.1327] manager: (tap7fc1dfc3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 03:21:09 np0005539551 systemd-udevd[271254]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.141 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.142 227364 DEBUG nova.virt.libvirt.vif [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:03Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.142 227364 DEBUG nova.network.os_vif_util [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.144 227364 DEBUG nova.network.os_vif_util [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.145 227364 DEBUG nova.objects.instance [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.159 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <uuid>37bf3f0c-b49b-457b-81be-b4b31f32d872</uuid>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <name>instance-00000077</name>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-700901307</nova:name>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:21:08</nova:creationTime>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <nova:port uuid="f435ee76-ed2f-4ad8-a9e1-bda955080b3e">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="serial">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="uuid">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a0:40:0c"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <target dev="tapf435ee76-ed"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/console.log" append="off"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:21:09 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:21:09 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:21:09 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:21:09 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.160 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Preparing to wait for external event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.160 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.160 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.160 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.161 227364 DEBUG nova.virt.libvirt.vif [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:03Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.161 227364 DEBUG nova.network.os_vif_util [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.162 227364 DEBUG nova.network.os_vif_util [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.162 227364 DEBUG os_vif [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.163 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.163 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.165 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[21ceba9e-931e-41d9-a3cd-831fffb7f20b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.171 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6c59c4-d602-4db5-a414-3fcadb6347bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.172 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.172 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf435ee76-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.172 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf435ee76-ed, col_values=(('external_ids', {'iface-id': 'f435ee76-ed2f-4ad8-a9e1-bda955080b3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:40:0c', 'vm-uuid': '37bf3f0c-b49b-457b-81be-b4b31f32d872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.1756] manager: (tapf435ee76-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.176 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.1941] device (tap7fc1dfc3-80): carrier: link connected
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.199 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e46d722f-289a-410b-ada6-a21fb35a6050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.212 227364 INFO os_vif [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.215 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[64390f3d-5d38-495c-9669-a81761256c02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751607, 'reachable_time': 31671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271288, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.228 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.231 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[021c0a4e-4cb4-4ac8-be7d-837a5db92c96]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:273e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751607, 'tstamp': 751607}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271291, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:09Z|00473|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 ovn-installed in OVS
Nov 29 03:21:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:09Z|00474|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 up in Southbound
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.252 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bbd81d-f4d4-4fc2-ab50-c618ba19fd89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751607, 'reachable_time': 31671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271292, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.265 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.265 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.266 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:a0:40:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.266 227364 INFO nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Using config drive#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.284 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[47cc983f-906e-4267-bf2e-dc8ecc7ce08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.305 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.349 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[44ab91bb-440d-4f3d-867e-816c17db45ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.350 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.350 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.351 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.391 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 kernel: tap7fc1dfc3-80: entered promiscuous mode
Nov 29 03:21:09 np0005539551 NetworkManager[48922]: <info>  [1764404469.3929] manager: (tap7fc1dfc3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.399 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:09Z|00475|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.434 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.435 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.436 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f1009e28-8570-45c7-b370-ff254d2840f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.436 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:09.437 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'env', 'PROCESS_TAG=haproxy-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc1dfc3-8d7f-4854-980d-37a93f366035.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.670 227364 INFO nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Creating config drive at /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.675 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp028788j1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.706 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404469.6957762, 83af56cb-4634-464b-a921-b228b72f2ea5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.708 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.740 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.745 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404469.6963785, 83af56cb-4634-464b-a921-b228b72f2ea5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.745 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.775 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.778 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:09 np0005539551 podman[271392]: 2025-11-29 08:21:09.793014794 +0000 UTC m=+0.049734207 container create 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.805 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.814 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp028788j1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:09 np0005539551 systemd[1]: Started libpod-conmon-8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd.scope.
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.843 227364 DEBUG nova.storage.rbd_utils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:09 np0005539551 nova_compute[227360]: 2025-11-29 08:21:09.846 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:09 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:21:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fb9b023ccdf9f8b71c051038dc0187ccda19fde126cca34bf92bcedf978152/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:09 np0005539551 podman[271392]: 2025-11-29 08:21:09.765437228 +0000 UTC m=+0.022156661 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:09 np0005539551 podman[271392]: 2025-11-29 08:21:09.867566423 +0000 UTC m=+0.124285856 container init 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:21:09 np0005539551 podman[271392]: 2025-11-29 08:21:09.872681732 +0000 UTC m=+0.129401155 container start 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:21:09 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [NOTICE]   (271432) : New worker (271434) forked
Nov 29 03:21:09 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [NOTICE]   (271432) : Loading success.
Nov 29 03:21:10 np0005539551 nova_compute[227360]: 2025-11-29 08:21:10.416 227364 DEBUG oslo_concurrency.processutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config 37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:10 np0005539551 nova_compute[227360]: 2025-11-29 08:21:10.417 227364 INFO nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Deleting local config drive /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:10 np0005539551 kernel: tapf435ee76-ed: entered promiscuous mode
Nov 29 03:21:10 np0005539551 systemd-udevd[271285]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:10 np0005539551 NetworkManager[48922]: <info>  [1764404470.5114] manager: (tapf435ee76-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 03:21:10 np0005539551 NetworkManager[48922]: <info>  [1764404470.5321] device (tapf435ee76-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:10 np0005539551 NetworkManager[48922]: <info>  [1764404470.5332] device (tapf435ee76-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:10Z|00476|binding|INFO|Claiming lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e for this chassis.
Nov 29 03:21:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:10Z|00477|binding|INFO|f435ee76-ed2f-4ad8-a9e1-bda955080b3e: Claiming fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:21:10 np0005539551 nova_compute[227360]: 2025-11-29 08:21:10.538 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:10Z|00478|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e ovn-installed in OVS
Nov 29 03:21:10 np0005539551 nova_compute[227360]: 2025-11-29 08:21:10.565 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:10 np0005539551 systemd-machined[190756]: New machine qemu-53-instance-00000077.
Nov 29 03:21:10 np0005539551 systemd[1]: Started Virtual Machine qemu-53-instance-00000077.
Nov 29 03:21:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:10Z|00479|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e up in Southbound
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.735 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.738 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 bound to our chassis#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.742 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b381cec-57a8-4697-a273-a320681301f8#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.758 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1130c5-b1c5-42f0-b631-6110ec57616b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.760 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b381cec-51 in ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.762 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b381cec-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.762 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c587191c-f3bc-4b50-93d4-cf3588e1e568]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.763 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f6217bd4-ce31-48b1-ab78-e0c78adb62e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.777 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1eecad-382d-4f55-8312-8ce3c35092d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.803 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9e0c32-a67f-43a2-ad88-2dabd810e1a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.831 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[cb113845-6745-4e1e-92e2-ec3c11963be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.837 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4f91634c-9b45-4e87-9c83-d317183d7233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 NetworkManager[48922]: <info>  [1764404470.8395] manager: (tap2b381cec-50): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.872 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1193b3-fab3-405b-b2cb-0b28117e9628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.874 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1044ad24-e4ca-4a57-8bf3-43232aa3fc5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:10 np0005539551 NetworkManager[48922]: <info>  [1764404470.8954] device (tap2b381cec-50): carrier: link connected
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.901 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae3e374-1070-4f91-8e6c-a94b191ab1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.918 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[11da27b0-37a7-4410-9fb7-f503319788a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751777, 'reachable_time': 27922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271491, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.936 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd56dca-5dfc-4dc1-8dbb-a4fce88a3eaa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:c0a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751777, 'tstamp': 751777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271492, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.955 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[83102870-6d07-4b85-be44-7f2a7b26e91c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751777, 'reachable_time': 27922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271500, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:10.984 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbc3774-01ed-4d6a-920a-89c650901758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.048 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19154217-9e91-4f2c-b904-7e64a80e6cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.049 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.049 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.050 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b381cec-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.052 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539551 NetworkManager[48922]: <info>  [1764404471.0526] manager: (tap2b381cec-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 03:21:11 np0005539551 kernel: tap2b381cec-50: entered promiscuous mode
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.057 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b381cec-50, col_values=(('external_ids', {'iface-id': '7127038e-90ca-4039-8404-4b8a2152df71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.058 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:11Z|00480|binding|INFO|Releasing lport 7127038e-90ca-4039-8404-4b8a2152df71 from this chassis (sb_readonly=0)
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.078 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.078 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.079 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1805651b-96b2-4c71-ba47-15029d7a94f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.080 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:11.081 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'env', 'PROCESS_TAG=haproxy-2b381cec-57a8-4697-a273-a320681301f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b381cec-57a8-4697-a273-a320681301f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:21:11 np0005539551 podman[271541]: 2025-11-29 08:21:11.419515433 +0000 UTC m=+0.020482585 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.603 227364 DEBUG nova.compute.manager [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.604 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.604 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.604 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.605 227364 DEBUG nova.compute.manager [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Processing event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.605 227364 DEBUG nova.compute.manager [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.606 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.606 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.606 227364 DEBUG oslo_concurrency.lockutils [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.606 227364 DEBUG nova.compute.manager [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.607 227364 WARNING nova.compute.manager [req-5eeb090b-9580-4a60-9fd0-d1e9e84855e2 req-7342bf16-4e49-494e-9d50-2f29c6701773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.607 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.613 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404471.612795, 83af56cb-4634-464b-a921-b228b72f2ea5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.613 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.616 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.624 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance spawned successfully.#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.626 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:11 np0005539551 podman[271541]: 2025-11-29 08:21:11.668866596 +0000 UTC m=+0.269833728 container create 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.693 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.697 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.698 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.698 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.698 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.698 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.699 227364 DEBUG nova.virt.libvirt.driver [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.702 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:11 np0005539551 systemd[1]: Started libpod-conmon-39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe.scope.
Nov 29 03:21:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:11.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:11 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:21:11 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c41a157e00037349a87420ff03012c1b0a95c070cd2283c29d09f5a4cd4207/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.813 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.814 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404471.7374249, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.814 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:11 np0005539551 podman[271541]: 2025-11-29 08:21:11.853077005 +0000 UTC m=+0.454044237 container init 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:21:11 np0005539551 podman[271541]: 2025-11-29 08:21:11.860501485 +0000 UTC m=+0.461468627 container start 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:21:11 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [NOTICE]   (271587) : New worker (271589) forked
Nov 29 03:21:11 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [NOTICE]   (271587) : Loading success.
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.910 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.916 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404471.737578, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.916 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.953 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.956 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.966 227364 INFO nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Took 10.61 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.966 227364 DEBUG nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:11 np0005539551 nova_compute[227360]: 2025-11-29 08:21:11.976 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.004 227364 DEBUG nova.compute.manager [req-2fb0982c-fe45-47d4-a126-a249cec95331 req-f8033d01-69e8-47c2-99ba-838983b5772d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.006 227364 DEBUG oslo_concurrency.lockutils [req-2fb0982c-fe45-47d4-a126-a249cec95331 req-f8033d01-69e8-47c2-99ba-838983b5772d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.006 227364 DEBUG oslo_concurrency.lockutils [req-2fb0982c-fe45-47d4-a126-a249cec95331 req-f8033d01-69e8-47c2-99ba-838983b5772d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.006 227364 DEBUG oslo_concurrency.lockutils [req-2fb0982c-fe45-47d4-a126-a249cec95331 req-f8033d01-69e8-47c2-99ba-838983b5772d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.006 227364 DEBUG nova.compute.manager [req-2fb0982c-fe45-47d4-a126-a249cec95331 req-f8033d01-69e8-47c2-99ba-838983b5772d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Processing event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.008 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.013 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404472.0133178, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.013 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.026 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.030 227364 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance spawned successfully.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.030 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.070 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.076 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.078 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.079 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.079 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.079 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.080 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.080 227364 DEBUG nova.virt.libvirt.driver [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.132 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.135 227364 INFO nova.compute.manager [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Took 13.52 seconds to build instance.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.171 227364 DEBUG oslo_concurrency.lockutils [None req-9561c618-03f2-4ad5-82ee-4d046a376372 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.177 227364 INFO nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.177 227364 DEBUG nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.386 227364 INFO nova.compute.manager [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Took 13.16 seconds to build instance.#033[00m
Nov 29 03:21:12 np0005539551 nova_compute[227360]: 2025-11-29 08:21:12.505 227364 DEBUG oslo_concurrency.lockutils [None req-9f40e5de-3262-4348-8b50-8bdf9ec916d3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:13.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.092 227364 DEBUG nova.compute.manager [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.093 227364 DEBUG oslo_concurrency.lockutils [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.093 227364 DEBUG oslo_concurrency.lockutils [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.093 227364 DEBUG oslo_concurrency.lockutils [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.094 227364 DEBUG nova.compute.manager [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.094 227364 WARNING nova.compute.manager [req-af65b314-6b68-47ed-824c-95d7e3743845 req-e8935c8a-3c9f-4fff-968e-b0d223133e3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.175 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:14.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.972 227364 DEBUG oslo_concurrency.lockutils [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.972 227364 DEBUG oslo_concurrency.lockutils [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.972 227364 DEBUG nova.compute.manager [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.977 227364 DEBUG nova.compute.manager [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:21:14 np0005539551 nova_compute[227360]: 2025-11-29 08:21:14.978 227364 DEBUG nova.objects.instance [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'flavor' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:15 np0005539551 nova_compute[227360]: 2025-11-29 08:21:15.004 227364 DEBUG nova.virt.libvirt.driver [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:21:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:15.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:16 np0005539551 nova_compute[227360]: 2025-11-29 08:21:16.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:17 np0005539551 nova_compute[227360]: 2025-11-29 08:21:17.347 227364 DEBUG nova.compute.manager [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:17 np0005539551 nova_compute[227360]: 2025-11-29 08:21:17.347 227364 DEBUG nova.compute.manager [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:17 np0005539551 nova_compute[227360]: 2025-11-29 08:21:17.347 227364 DEBUG oslo_concurrency.lockutils [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:17 np0005539551 nova_compute[227360]: 2025-11-29 08:21:17.347 227364 DEBUG oslo_concurrency.lockutils [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:17 np0005539551 nova_compute[227360]: 2025-11-29 08:21:17.348 227364 DEBUG nova.network.neutron [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:18.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:19 np0005539551 nova_compute[227360]: 2025-11-29 08:21:19.176 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:19 np0005539551 nova_compute[227360]: 2025-11-29 08:21:19.354 227364 DEBUG nova.network.neutron [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:19 np0005539551 nova_compute[227360]: 2025-11-29 08:21:19.354 227364 DEBUG nova.network.neutron [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:19 np0005539551 nova_compute[227360]: 2025-11-29 08:21:19.371 227364 DEBUG oslo_concurrency.lockutils [req-6a134ade-84fd-4f71-a5e0-267c46cb868e req-57a7281b-6006-4a08-9ef6-3289d8f799d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:19.871 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:19.871 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:19.872 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:20.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:21 np0005539551 nova_compute[227360]: 2025-11-29 08:21:21.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:24 np0005539551 nova_compute[227360]: 2025-11-29 08:21:24.178 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:21:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:24.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:21:25 np0005539551 nova_compute[227360]: 2025-11-29 08:21:25.047 227364 DEBUG nova.virt.libvirt.driver [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:21:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:25Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:21:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:25Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:21:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:25Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:e6:6b 10.100.0.13
Nov 29 03:21:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:25Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:e6:6b 10.100.0.13
Nov 29 03:21:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:26.005 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:26 np0005539551 nova_compute[227360]: 2025-11-29 08:21:26.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:26.006 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:26 np0005539551 nova_compute[227360]: 2025-11-29 08:21:26.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539551 podman[271600]: 2025-11-29 08:21:26.622089541 +0000 UTC m=+0.063974783 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:21:26 np0005539551 podman[271602]: 2025-11-29 08:21:26.631074085 +0000 UTC m=+0.070242524 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:21:26 np0005539551 podman[271599]: 2025-11-29 08:21:26.636945543 +0000 UTC m=+0.088246170 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:28 np0005539551 nova_compute[227360]: 2025-11-29 08:21:28.348 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539551 nova_compute[227360]: 2025-11-29 08:21:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:28 np0005539551 kernel: tapcfb780c5-38 (unregistering): left promiscuous mode
Nov 29 03:21:28 np0005539551 NetworkManager[48922]: <info>  [1764404488.7479] device (tapcfb780c5-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:28Z|00481|binding|INFO|Releasing lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 from this chassis (sb_readonly=0)
Nov 29 03:21:28 np0005539551 nova_compute[227360]: 2025-11-29 08:21:28.752 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:28Z|00482|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 down in Southbound
Nov 29 03:21:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:28Z|00483|binding|INFO|Removing iface tapcfb780c5-38 ovn-installed in OVS
Nov 29 03:21:28 np0005539551 nova_compute[227360]: 2025-11-29 08:21:28.755 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:28.759 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:28.760 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:21:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:28.762 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:28.764 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfaa891-1a54-44bf-9277-6110693a3ecf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:28.764 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace which is not needed anymore#033[00m
Nov 29 03:21:28 np0005539551 nova_compute[227360]: 2025-11-29 08:21:28.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539551 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 29 03:21:28 np0005539551 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000076.scope: Consumed 14.269s CPU time.
Nov 29 03:21:28 np0005539551 systemd-machined[190756]: Machine qemu-52-instance-00000076 terminated.
Nov 29 03:21:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:28.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [NOTICE]   (271432) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [NOTICE]   (271432) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [WARNING]  (271432) : Exiting Master process...
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [WARNING]  (271432) : Exiting Master process...
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [ALERT]    (271432) : Current worker (271434) exited with code 143 (Terminated)
Nov 29 03:21:28 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[271416]: [WARNING]  (271432) : All workers exited. Exiting... (0)
Nov 29 03:21:28 np0005539551 systemd[1]: libpod-8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd.scope: Deactivated successfully.
Nov 29 03:21:28 np0005539551 podman[271690]: 2025-11-29 08:21:28.934998849 +0000 UTC m=+0.075833354 container died 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.065 227364 INFO nova.virt.libvirt.driver [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance shutdown successfully after 14 seconds.#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.072 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance destroyed successfully.#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.073 227364 DEBUG nova.objects.instance [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.089 227364 DEBUG nova.compute.manager [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:29 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:29 np0005539551 systemd[1]: var-lib-containers-storage-overlay-66fb9b023ccdf9f8b71c051038dc0187ccda19fde126cca34bf92bcedf978152-merged.mount: Deactivated successfully.
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.137 227364 DEBUG oslo_concurrency.lockutils [None req-2ad80242-1dd4-481c-9a5f-086d57814806 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.461 227364 DEBUG nova.compute.manager [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.461 227364 DEBUG oslo_concurrency.lockutils [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.462 227364 DEBUG oslo_concurrency.lockutils [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.462 227364 DEBUG oslo_concurrency.lockutils [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.462 227364 DEBUG nova.compute.manager [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.462 227364 WARNING nova.compute.manager [req-4f5b92c7-c692-41f8-a5ed-92fd5de64636 req-18804c7f-a2f4-4489-83a5-68cbe9c9c721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:29 np0005539551 podman[271690]: 2025-11-29 08:21:29.561501017 +0000 UTC m=+0.702335552 container cleanup 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:21:29 np0005539551 systemd[1]: libpod-conmon-8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd.scope: Deactivated successfully.
Nov 29 03:21:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:29 np0005539551 podman[271734]: 2025-11-29 08:21:29.855451678 +0000 UTC m=+0.270468276 container remove 8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.865 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f50b77-cd5e-4517-9be0-e3bbb5ddd354]: (4, ('Sat Nov 29 08:21:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd)\n8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd\nSat Nov 29 08:21:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd)\n8075158aa51f64181060ff01e8747cd40d7ffc740022a70166189b6f4cdc1cdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.868 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7e03f0-3fe2-4ee0-8f44-e42ed3b99ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.869 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.915 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:29 np0005539551 kernel: tap7fc1dfc3-80: left promiscuous mode
Nov 29 03:21:29 np0005539551 nova_compute[227360]: 2025-11-29 08:21:29.948 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.951 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7d7f06-0c93-4b37-95b5-1a56089b251c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.979 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[71d1f8ea-251d-4dab-a967-4be694e2201f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.980 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5f54fdbb-8aa7-4edd-b348-fb275924abf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:29.998 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d10a8f5-6e2c-4dc8-8d2c-7ea3a2122a04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751599, 'reachable_time': 15153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271753, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:30.000 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:21:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:30.001 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[a87386f7-0510-420d-b3d7-9bd49cdf78b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:30 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7fc1dfc3\x2d8d7f\x2d4854\x2d980d\x2d37a93f366035.mount: Deactivated successfully.
Nov 29 03:21:30 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:30.010 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:30 np0005539551 nova_compute[227360]: 2025-11-29 08:21:30.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:30 np0005539551 nova_compute[227360]: 2025-11-29 08:21:30.530 227364 INFO nova.compute.manager [None req-5e6c4b6d-7a3b-4f3f-a5bc-477f2b3bce7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Get console output#033[00m
Nov 29 03:21:30 np0005539551 nova_compute[227360]: 2025-11-29 08:21:30.537 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:21:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.088 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.429 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.551 227364 DEBUG nova.compute.manager [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.552 227364 DEBUG oslo_concurrency.lockutils [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.553 227364 DEBUG oslo_concurrency.lockutils [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.553 227364 DEBUG oslo_concurrency.lockutils [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.554 227364 DEBUG nova.compute.manager [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:31 np0005539551 nova_compute[227360]: 2025-11-29 08:21:31.554 227364 WARNING nova.compute.manager [req-17768ee2-6324-4298-8bfa-61adfc7894eb req-b767f088-390e-4711-bced-433512efa252 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:32 np0005539551 nova_compute[227360]: 2025-11-29 08:21:32.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.610083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492610159, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2457, "num_deletes": 255, "total_data_size": 5533209, "memory_usage": 5624176, "flush_reason": "Manual Compaction"}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492633869, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 3625781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45984, "largest_seqno": 48436, "table_properties": {"data_size": 3615901, "index_size": 6182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21788, "raw_average_key_size": 20, "raw_value_size": 3595663, "raw_average_value_size": 3454, "num_data_blocks": 268, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404297, "oldest_key_time": 1764404297, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 23828 microseconds, and 7479 cpu microseconds.
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.633916) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 3625781 bytes OK
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.633942) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.635301) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.635313) EVENT_LOG_v1 {"time_micros": 1764404492635309, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.635329) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 5522247, prev total WAL file size 5522247, number of live WAL files 2.
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.636451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(3540KB)], [90(11MB)]
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492636521, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 15607230, "oldest_snapshot_seqno": -1}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8188 keys, 13656683 bytes, temperature: kUnknown
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492760221, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 13656683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13600176, "index_size": 34955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 211505, "raw_average_key_size": 25, "raw_value_size": 13452059, "raw_average_value_size": 1642, "num_data_blocks": 1376, "num_entries": 8188, "num_filter_entries": 8188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.760537) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13656683 bytes
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.761977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.0 rd, 110.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 8715, records dropped: 527 output_compression: NoCompression
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.762000) EVENT_LOG_v1 {"time_micros": 1764404492761989, "job": 56, "event": "compaction_finished", "compaction_time_micros": 123819, "compaction_time_cpu_micros": 55250, "output_level": 6, "num_output_files": 1, "total_output_size": 13656683, "num_input_records": 8715, "num_output_records": 8188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492762941, "job": 56, "event": "table_file_deletion", "file_number": 92}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492765711, "job": 56, "event": "table_file_deletion", "file_number": 90}
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.636325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.765743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.765748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.765750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.765752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:32.765754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.278 227364 INFO nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Rebuilding instance#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.612 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.656 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.715 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.728 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.739 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.751 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.763 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.766 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance already shutdown.#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.774 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance destroyed successfully.#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.782 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance destroyed successfully.#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.785 227364 DEBUG nova.virt.libvirt.vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project
-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:32Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.785 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.787 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.787 227364 DEBUG os_vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.790 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb780c5-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.793 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.796 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:33 np0005539551 nova_compute[227360]: 2025-11-29 08:21:33.799 227364 INFO os_vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38')#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.214 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deleting instance files /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5_del#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.214 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deletion of /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5_del complete#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.364 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.364 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating image(s)#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.386 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.411 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.438 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.441 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.470 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.510 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.512 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.513 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.513 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.543 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.546 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 83af56cb-4634-464b-a921-b228b72f2ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.674 227364 INFO nova.compute.manager [None req-14c68bf3-0f0b-40a5-ab05-eab9373321bf fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Get console output#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.683 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.832 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 83af56cb-4634-464b-a921-b228b72f2ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.898 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:21:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.991 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.992 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Ensure instance console log exists: /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.993 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.993 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.993 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.995 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start _get_guest_xml network_info=[{"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:34 np0005539551 nova_compute[227360]: 2025-11-29 08:21:34.999 227364 WARNING nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.003 227364 DEBUG nova.virt.libvirt.host [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.004 227364 DEBUG nova.virt.libvirt.host [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.006 227364 DEBUG nova.virt.libvirt.host [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.006 227364 DEBUG nova.virt.libvirt.host [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.007 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.007 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.008 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.008 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.008 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.008 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.009 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.009 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.009 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.009 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.010 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.010 227364 DEBUG nova.virt.hardware [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.010 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.026 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.431 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.432 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.432 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1886697688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.474 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.501 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.505 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3539159093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.903 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/157237776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.937 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.938 227364 DEBUG nova.virt.libvirt.vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:34Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.938 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.939 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.942 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <uuid>83af56cb-4634-464b-a921-b228b72f2ea5</uuid>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <name>instance-00000076</name>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:name>tempest-tempest.common.compute-instance-1605427141</nova:name>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:21:34</nova:creationTime>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <nova:port uuid="cfb780c5-3830-452f-89f0-cd6f52cd9e67">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="serial">83af56cb-4634-464b-a921-b228b72f2ea5</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="uuid">83af56cb-4634-464b-a921-b228b72f2ea5</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/83af56cb-4634-464b-a921-b228b72f2ea5_disk">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/83af56cb-4634-464b-a921-b228b72f2ea5_disk.config">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:d7:e6:6b"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <target dev="tapcfb780c5-38"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/console.log" append="off"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:21:35 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:21:35 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:21:35 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:21:35 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.943 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Preparing to wait for external event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.944 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.945 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.945 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.946 227364 DEBUG nova.virt.libvirt.vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='temp
est-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:34Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.946 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.947 227364 DEBUG nova.network.os_vif_util [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.947 227364 DEBUG os_vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.948 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.949 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.949 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.953 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.953 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb780c5-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.954 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcfb780c5-38, col_values=(('external_ids', {'iface-id': 'cfb780c5-3830-452f-89f0-cd6f52cd9e67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:e6:6b', 'vm-uuid': '83af56cb-4634-464b-a921-b228b72f2ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:35 np0005539551 NetworkManager[48922]: <info>  [1764404495.9563] manager: (tapcfb780c5-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.960 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.960 227364 INFO os_vif [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38')#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.968 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:35 np0005539551 nova_compute[227360]: 2025-11-29 08:21:35.968 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.003 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.003 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.003 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:d7:e6:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.004 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Using config drive#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.030 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.055 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.083 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'keypairs' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.089 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.158 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.159 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4201MB free_disk=20.834129333496094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.159 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.159 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.215 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 83af56cb-4634-464b-a921-b228b72f2ea5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.215 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 37bf3f0c-b49b-457b-81be-b4b31f32d872 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.215 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.215 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.282 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.476 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Creating config drive at /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.487 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2mh0fu5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.558 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.622 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2mh0fu5k" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.656 227364 DEBUG nova.storage.rbd_utils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.661 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/823937770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.757 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.762 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.793 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.811 227364 DEBUG oslo_concurrency.processutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config 83af56cb-4634-464b-a921-b228b72f2ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.811 227364 INFO nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deleting local config drive /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.818 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.818 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:36 np0005539551 kernel: tapcfb780c5-38: entered promiscuous mode
Nov 29 03:21:36 np0005539551 NetworkManager[48922]: <info>  [1764404496.8666] manager: (tapcfb780c5-38): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.866 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:36Z|00484|binding|INFO|Claiming lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 for this chassis.
Nov 29 03:21:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:36Z|00485|binding|INFO|cfb780c5-3830-452f-89f0-cd6f52cd9e67: Claiming fa:16:3e:d7:e6:6b 10.100.0.13
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.879 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.881 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.882 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.883 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:36Z|00486|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 ovn-installed in OVS
Nov 29 03:21:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:36Z|00487|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 up in Southbound
Nov 29 03:21:36 np0005539551 nova_compute[227360]: 2025-11-29 08:21:36.886 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.893 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89ce18be-74aa-4e00-9e98-376751c30307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.894 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc1dfc3-81 in ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.895 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc1dfc3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.895 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a262316c-4b84-47d7-ac82-73441f123058]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.896 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[082c0b2d-ce07-43b1-ab80-601a691cd955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.907 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[2fec10ea-05b2-4ebf-a70e-da778604224d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 systemd-machined[190756]: New machine qemu-54-instance-00000076.
Nov 29 03:21:36 np0005539551 systemd[1]: Started Virtual Machine qemu-54-instance-00000076.
Nov 29 03:21:36 np0005539551 systemd-udevd[272121]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.931 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[125826d3-a8e6-43e3-8053-950209bfe623]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 NetworkManager[48922]: <info>  [1764404496.9367] device (tapcfb780c5-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:36 np0005539551 NetworkManager[48922]: <info>  [1764404496.9374] device (tapcfb780c5-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.963 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[aa795ea6-c074-492d-80d0-c958e19f4747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 NetworkManager[48922]: <info>  [1764404496.9694] manager: (tap7fc1dfc3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.969 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cde32f37-513b-4133-919d-24cab70317fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:36.997 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a0051a5b-5df9-409f-a5c5-6807e98aa33e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.000 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ff4513-1f5f-4939-ad64-7ffc264d9f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 NetworkManager[48922]: <info>  [1764404497.0206] device (tap7fc1dfc3-80): carrier: link connected
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.024 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5b5b6b-2f1b-45e6-b908-709c35e96c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.038 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[196ac40c-17d6-4005-bb9a-90689e97b72e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754389, 'reachable_time': 40619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272151, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[53b003d2-72c6-42e6-b511-de221e48ff91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:273e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 754389, 'tstamp': 754389}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272152, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.066 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b7825f09-d4b7-4409-a011-a131022ef10b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754389, 'reachable_time': 40619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272153, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[030c9494-3a8c-4ae6-a898-a978500dd627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.134 227364 DEBUG nova.compute.manager [req-9977c109-aa37-4d18-8243-207b99535297 req-55c1e0da-0c2d-45f9-943b-dbdf0269f1d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.135 227364 DEBUG oslo_concurrency.lockutils [req-9977c109-aa37-4d18-8243-207b99535297 req-55c1e0da-0c2d-45f9-943b-dbdf0269f1d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.135 227364 DEBUG oslo_concurrency.lockutils [req-9977c109-aa37-4d18-8243-207b99535297 req-55c1e0da-0c2d-45f9-943b-dbdf0269f1d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.135 227364 DEBUG oslo_concurrency.lockutils [req-9977c109-aa37-4d18-8243-207b99535297 req-55c1e0da-0c2d-45f9-943b-dbdf0269f1d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.136 227364 DEBUG nova.compute.manager [req-9977c109-aa37-4d18-8243-207b99535297 req-55c1e0da-0c2d-45f9-943b-dbdf0269f1d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Processing event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.152 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[41f13ca5-beed-408d-8704-b22fb8175a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.153 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.153 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.154 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.155 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 kernel: tap7fc1dfc3-80: entered promiscuous mode
Nov 29 03:21:37 np0005539551 NetworkManager[48922]: <info>  [1764404497.1576] manager: (tap7fc1dfc3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.157 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.158 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00488|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.171 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.172 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.173 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7987be5b-9ef3-4ddc-b479-9ee7c34de38c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.174 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.174 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'env', 'PROCESS_TAG=haproxy-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc1dfc3-8d7f-4854-980d-37a93f366035.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.327 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.329 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 83af56cb-4634-464b-a921-b228b72f2ea5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.330 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404497.3272152, 83af56cb-4634-464b-a921-b228b72f2ea5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.330 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.336 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.343 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance spawned successfully.#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.344 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.353 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.358 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.370 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.371 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.371 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.372 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.373 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.374 227364 DEBUG nova.virt.libvirt.driver [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.389 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.389 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404497.327459, 83af56cb-4634-464b-a921-b228b72f2ea5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.390 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.416 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.421 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404497.3350384, 83af56cb-4634-464b-a921-b228b72f2ea5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.421 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.452 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.458 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.462 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.514 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:21:37 np0005539551 podman[272227]: 2025-11-29 08:21:37.528364176 +0000 UTC m=+0.048509234 container create 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.544 227364 INFO nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] bringing vm to original state: 'stopped'#033[00m
Nov 29 03:21:37 np0005539551 systemd[1]: Started libpod-conmon-42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7.scope.
Nov 29 03:21:37 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:21:37 np0005539551 podman[272227]: 2025-11-29 08:21:37.502384433 +0000 UTC m=+0.022529521 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de3e6df2462d6b13e72f848f951c6a42c869f863fea4aeb2d2862045aa8d19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:37 np0005539551 podman[272227]: 2025-11-29 08:21:37.619910256 +0000 UTC m=+0.140055324 container init 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.620 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.620 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.620 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 podman[272227]: 2025-11-29 08:21:37.62521711 +0000 UTC m=+0.145362168 container start 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.626 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [NOTICE]   (272246) : New worker (272248) forked
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [NOTICE]   (272246) : Loading success.
Nov 29 03:21:37 np0005539551 kernel: tapcfb780c5-38 (unregistering): left promiscuous mode
Nov 29 03:21:37 np0005539551 NetworkManager[48922]: <info>  [1764404497.6818] device (tapcfb780c5-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.687 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00489|binding|INFO|Releasing lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 from this chassis (sb_readonly=0)
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00490|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 down in Southbound
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00491|binding|INFO|Removing iface tapcfb780c5-38 ovn-installed in OVS
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.697 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.699 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.700 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.700 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7efd3229-b2eb-4bb1-a9dc-273078d618d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.701 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace which is not needed anymore#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 29 03:21:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:37 np0005539551 systemd-machined[190756]: Machine qemu-54-instance-00000076 terminated.
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [NOTICE]   (272246) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [NOTICE]   (272246) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [WARNING]  (272246) : Exiting Master process...
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [WARNING]  (272246) : Exiting Master process...
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [ALERT]    (272246) : Current worker (272248) exited with code 143 (Terminated)
Nov 29 03:21:37 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[272242]: [WARNING]  (272246) : All workers exited. Exiting... (0)
Nov 29 03:21:37 np0005539551 systemd[1]: libpod-42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7.scope: Deactivated successfully.
Nov 29 03:21:37 np0005539551 podman[272278]: 2025-11-29 08:21:37.825205056 +0000 UTC m=+0.043730056 container died 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:21:37 np0005539551 kernel: tapcfb780c5-38: entered promiscuous mode
Nov 29 03:21:37 np0005539551 systemd-udevd[272145]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:37 np0005539551 NetworkManager[48922]: <info>  [1764404497.8447] manager: (tapcfb780c5-38): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00492|binding|INFO|Claiming lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 for this chassis.
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00493|binding|INFO|cfb780c5-3830-452f-89f0-cd6f52cd9e67: Claiming fa:16:3e:d7:e6:6b 10.100.0.13
Nov 29 03:21:37 np0005539551 kernel: tapcfb780c5-38 (unregistering): left promiscuous mode
Nov 29 03:21:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-99de3e6df2462d6b13e72f848f951c6a42c869f863fea4aeb2d2862045aa8d19-merged.mount: Deactivated successfully.
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.853 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:37 np0005539551 podman[272278]: 2025-11-29 08:21:37.85897874 +0000 UTC m=+0.077503740 container cleanup 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00494|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 ovn-installed in OVS
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00495|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 up in Southbound
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00496|binding|INFO|Releasing lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 from this chassis (sb_readonly=1)
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.869 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00497|if_status|INFO|Not setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 down as sb is readonly
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00498|binding|INFO|Removing iface tapcfb780c5-38 ovn-installed in OVS
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.872 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00499|binding|INFO|Releasing lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 from this chassis (sb_readonly=0)
Nov 29 03:21:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:37Z|00500|binding|INFO|Setting lport cfb780c5-3830-452f-89f0-cd6f52cd9e67 down in Southbound
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.875 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance destroyed successfully.#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.876 227364 DEBUG nova.compute.manager [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:37 np0005539551 systemd[1]: libpod-conmon-42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7.scope: Deactivated successfully.
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.881 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e6:6b 10.100.0.13'], port_security=['fa:16:3e:d7:e6:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '83af56cb-4634-464b-a921-b228b72f2ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cfb780c5-3830-452f-89f0-cd6f52cd9e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.886 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 podman[272310]: 2025-11-29 08:21:37.916489777 +0000 UTC m=+0.036575801 container remove 42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.922 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8237ac4f-8c4a-4d2d-bc1e-f8adfe20d1b9]: (4, ('Sat Nov 29 08:21:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7)\n42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7\nSat Nov 29 08:21:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7)\n42bcbca618014a5c4b62ebd36b8146b79be3c1688494d235a76324addaebffa7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.925 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0b520566-75c7-4d78-a11d-669ade7703f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.925 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.927 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 kernel: tap7fc1dfc3-80: left promiscuous mode
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.943 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.945 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0cad5d2d-b8fa-4dc1-9cf5-8723e168b01f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.955 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.964 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4260aa-6ef6-4639-95f9-913829241c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.966 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ecda2bf5-2e21-4132-98a9-c88bf3f9dddd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.982 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[603f8b14-9f6a-4a39-8030-0b5b3a27b4da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754383, 'reachable_time': 21581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272330, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.984 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.984 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[db0b024b-a133-4775-af43-e66ff8d14d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.984 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:21:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7fc1dfc3\x2d8d7f\x2d4854\x2d980d\x2d37a93f366035.mount: Deactivated successfully.
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.986 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.986 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1cee9fa0-40ba-4868-9604-6059f197e6bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.987 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cfb780c5-3830-452f-89f0-cd6f52cd9e67 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.987 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.987 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:37 np0005539551 nova_compute[227360]: 2025-11-29 08:21:37.988 227364 DEBUG nova.objects.instance [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.988 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:37.988 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e621a75d-86d9-45b1-a58b-ceb55eb4e17a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:38 np0005539551 nova_compute[227360]: 2025-11-29 08:21:38.062 227364 DEBUG oslo_concurrency.lockutils [None req-5080df77-d19f-4bb2-b78a-2683a796fbca 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:38 np0005539551 nova_compute[227360]: 2025-11-29 08:21:38.672 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:38 np0005539551 nova_compute[227360]: 2025-11-29 08:21:38.672 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:38 np0005539551 nova_compute[227360]: 2025-11-29 08:21:38.672 227364 DEBUG nova.network.neutron [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:38 np0005539551 nova_compute[227360]: 2025-11-29 08:21:38.818 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.212 227364 DEBUG nova.compute.manager [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.212 227364 DEBUG oslo_concurrency.lockutils [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.212 227364 DEBUG oslo_concurrency.lockutils [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.213 227364 DEBUG oslo_concurrency.lockutils [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.213 227364 DEBUG nova.compute.manager [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:39 np0005539551 nova_compute[227360]: 2025-11-29 08:21:39.213 227364 WARNING nova.compute.manager [req-38fe644c-7b63-49af-8a10-6a2b62419182 req-faa969fa-e112-4f80-b201-ad622aeae036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:39.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:40 np0005539551 nova_compute[227360]: 2025-11-29 08:21:40.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.318 227364 DEBUG nova.network.neutron [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.333 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.384 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.385 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.385 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.386 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.386 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.386 227364 WARNING nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.387 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.387 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.388 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.388 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.389 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.389 227364 WARNING nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.390 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.390 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.391 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.391 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.392 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.392 227364 WARNING nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.393 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.393 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.394 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.394 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.395 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.395 227364 WARNING nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.396 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.396 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.397 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.397 227364 DEBUG oslo_concurrency.lockutils [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.398 227364 DEBUG nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.398 227364 WARNING nova.compute.manager [req-47e4d1df-2282-4fb6-a090-d3703c52613e req-3978438e-6bac-460c-91d1-4191f8fc4276 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-unplugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.421 227364 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.422 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Creating file /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/0e5700784ff843708da99f5c99630f40.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.422 227364 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/0e5700784ff843708da99f5c99630f40.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.484 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.485 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.486 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.486 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.486 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.489 227364 INFO nova.compute.manager [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Terminating instance#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.491 227364 DEBUG nova.compute.manager [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.502 227364 INFO nova.virt.libvirt.driver [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Instance destroyed successfully.#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.503 227364 DEBUG nova.objects.instance [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 83af56cb-4634-464b-a921-b228b72f2ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.517 227364 DEBUG nova.virt.libvirt.vif [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1605427141',display_name='tempest-tempest.common.compute-instance-1605427141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1605427141',id=118,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-9cfxhqhs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_
hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:38Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=83af56cb-4634-464b-a921-b228b72f2ea5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.518 227364 DEBUG nova.network.os_vif_util [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "address": "fa:16:3e:d7:e6:6b", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfb780c5-38", "ovs_interfaceid": "cfb780c5-3830-452f-89f0-cd6f52cd9e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.519 227364 DEBUG nova.network.os_vif_util [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.520 227364 DEBUG os_vif [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.524 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.525 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb780c5-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.563 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.564 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.567 227364 INFO os_vif [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e6:6b,bridge_name='br-int',has_traffic_filtering=True,id=cfb780c5-3830-452f-89f0-cd6f52cd9e67,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfb780c5-38')#033[00m
Nov 29 03:21:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:41.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.916 227364 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/0e5700784ff843708da99f5c99630f40.tmp" returned: 1 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.916 227364 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/0e5700784ff843708da99f5c99630f40.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.916 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Creating directory /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:21:41 np0005539551 nova_compute[227360]: 2025-11-29 08:21:41.917 227364 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.150 227364 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.155 227364 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.163 227364 INFO nova.virt.libvirt.driver [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deleting instance files /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5_del#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.163 227364 INFO nova.virt.libvirt.driver [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deletion of /var/lib/nova/instances/83af56cb-4634-464b-a921-b228b72f2ea5_del complete#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.225 227364 INFO nova.compute.manager [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.225 227364 DEBUG oslo.service.loopingcall [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.226 227364 DEBUG nova.compute.manager [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:42 np0005539551 nova_compute[227360]: 2025-11-29 08:21:42.226 227364 DEBUG nova.network.neutron [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:21:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.188 227364 DEBUG nova.network.neutron [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.208 227364 INFO nova.compute.manager [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Took 0.98 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.252 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.252 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.299 227364 DEBUG nova.compute.manager [req-7c1ecdd2-becd-4492-991c-e1f399a277de req-28a404f9-7ddf-43c7-b29e-7de503e01785 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-deleted-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.318 227364 DEBUG oslo_concurrency.processutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.510 227364 DEBUG nova.compute.manager [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.511 227364 DEBUG oslo_concurrency.lockutils [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.511 227364 DEBUG oslo_concurrency.lockutils [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.512 227364 DEBUG oslo_concurrency.lockutils [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.512 227364 DEBUG nova.compute.manager [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] No waiting events found dispatching network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.512 227364 WARNING nova.compute.manager [req-f56166b1-cc82-4813-89ed-0025672f6bee req-aeba9c38-b877-4f15-ac1e-b16da5d97930 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Received unexpected event network-vif-plugged-cfb780c5-3830-452f-89f0-cd6f52cd9e67 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:21:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3354126060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.800 227364 DEBUG oslo_concurrency.processutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.805 227364 DEBUG nova.compute.provider_tree [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.822 227364 DEBUG nova.scheduler.client.report [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.847 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.870 227364 INFO nova.scheduler.client.report [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocations for instance 83af56cb-4634-464b-a921-b228b72f2ea5#033[00m
Nov 29 03:21:43 np0005539551 nova_compute[227360]: 2025-11-29 08:21:43.937 227364 DEBUG oslo_concurrency.lockutils [None req-1c750945-89a8-4df5-b66d-f64b23be1c05 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "83af56cb-4634-464b-a921-b228b72f2ea5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:44 np0005539551 kernel: tapf435ee76-ed (unregistering): left promiscuous mode
Nov 29 03:21:44 np0005539551 NetworkManager[48922]: <info>  [1764404504.4737] device (tapf435ee76-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.477 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:44Z|00501|binding|INFO|Releasing lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e from this chassis (sb_readonly=0)
Nov 29 03:21:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:44Z|00502|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e down in Southbound
Nov 29 03:21:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:21:44Z|00503|binding|INFO|Removing iface tapf435ee76-ed ovn-installed in OVS
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.482 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.486 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.488 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 unbound from our chassis#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.491 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b381cec-57a8-4697-a273-a320681301f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.492 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdec1c6-367c-4a8a-a86d-03913795fa1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.493 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace which is not needed anymore#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.506 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539551 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 03:21:44 np0005539551 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000077.scope: Consumed 14.367s CPU time.
Nov 29 03:21:44 np0005539551 systemd-machined[190756]: Machine qemu-53-instance-00000077 terminated.
Nov 29 03:21:44 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [NOTICE]   (271587) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:44 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [NOTICE]   (271587) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:44 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [WARNING]  (271587) : Exiting Master process...
Nov 29 03:21:44 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [ALERT]    (271587) : Current worker (271589) exited with code 143 (Terminated)
Nov 29 03:21:44 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[271582]: [WARNING]  (271587) : All workers exited. Exiting... (0)
Nov 29 03:21:44 np0005539551 systemd[1]: libpod-39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe.scope: Deactivated successfully.
Nov 29 03:21:44 np0005539551 podman[272398]: 2025-11-29 08:21:44.651735533 +0000 UTC m=+0.043106139 container died 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:21:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-50c41a157e00037349a87420ff03012c1b0a95c070cd2283c29d09f5a4cd4207-merged.mount: Deactivated successfully.
Nov 29 03:21:44 np0005539551 podman[272398]: 2025-11-29 08:21:44.693135895 +0000 UTC m=+0.084506481 container cleanup 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:21:44 np0005539551 systemd[1]: libpod-conmon-39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe.scope: Deactivated successfully.
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.710 227364 DEBUG nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.711 227364 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.711 227364 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.711 227364 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.712 227364 DEBUG nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.712 227364 WARNING nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:21:44 np0005539551 podman[272431]: 2025-11-29 08:21:44.7598137 +0000 UTC m=+0.045322439 container remove 39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.767 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[427e5204-f46d-49ce-ab03-a2ebfb338444]: (4, ('Sat Nov 29 08:21:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe)\n39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe\nSat Nov 29 08:21:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe)\n39925981a56efd3952d73a7563390dc949b7ec84a7e9619df95c462bc15e9dbe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.768 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4029b8fa-83dd-4166-b41d-0fced4d4c27c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.769 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.770 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539551 kernel: tap2b381cec-50: left promiscuous mode
Nov 29 03:21:44 np0005539551 nova_compute[227360]: 2025-11-29 08:21:44.792 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.794 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7ebead-ed7b-4493-80c3-2b91d5fc59c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.808 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[177e73c9-c01e-425d-b77a-a906e8a0f820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.809 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dff710fd-ed91-40d0-9d3d-8c16f9fd4c26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.823 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[886df097-e572-4940-96b6-9e5b16f34421]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751770, 'reachable_time': 33454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272455, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.825 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:21:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:21:44.826 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[806a9cfe-01d2-4fd1-8a52-f8e211bcd210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:44 np0005539551 systemd[1]: run-netns-ovnmeta\x2d2b381cec\x2d57a8\x2d4697\x2da273\x2da320681301f8.mount: Deactivated successfully.
Nov 29 03:21:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.182 227364 INFO nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.189 227364 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance destroyed successfully.#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.190 227364 DEBUG nova.virt.libvirt.vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:37Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.191 227364 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.192 227364 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.193 227364 DEBUG os_vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.197 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf435ee76-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.199 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.204 227364 INFO os_vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.210 227364 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.211 227364 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.398 227364 DEBUG neutronclient.v2_0.client [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.530 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.531 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:45 np0005539551 nova_compute[227360]: 2025-11-29 08:21:45.531 227364 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.094 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.909 227364 DEBUG nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.909 227364 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.910 227364 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.910 227364 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.911 227364 DEBUG nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:46 np0005539551 nova_compute[227360]: 2025-11-29 08:21:46.911 227364 WARNING nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:21:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:46.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539551 nova_compute[227360]: 2025-11-29 08:21:47.829 227364 DEBUG nova.compute.manager [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:47 np0005539551 nova_compute[227360]: 2025-11-29 08:21:47.829 227364 DEBUG nova.compute.manager [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:47 np0005539551 nova_compute[227360]: 2025-11-29 08:21:47.830 227364 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:47 np0005539551 nova_compute[227360]: 2025-11-29 08:21:47.830 227364 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:47 np0005539551 nova_compute[227360]: 2025-11-29 08:21:47.830 227364 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:48.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:49 np0005539551 nova_compute[227360]: 2025-11-29 08:21:49.697 227364 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:49 np0005539551 nova_compute[227360]: 2025-11-29 08:21:49.700 227364 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:49 np0005539551 nova_compute[227360]: 2025-11-29 08:21:49.725 227364 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:50 np0005539551 nova_compute[227360]: 2025-11-29 08:21:50.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 03:21:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:50.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:51.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.922 227364 DEBUG nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.922 227364 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.922 227364 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.922 227364 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.922 227364 DEBUG nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:51 np0005539551 nova_compute[227360]: 2025-11-29 08:21:51.923 227364 WARNING nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:21:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.053280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512053408, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 492, "num_deletes": 250, "total_data_size": 564022, "memory_usage": 574352, "flush_reason": "Manual Compaction"}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512058327, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 309886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48441, "largest_seqno": 48928, "table_properties": {"data_size": 307356, "index_size": 566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7141, "raw_average_key_size": 20, "raw_value_size": 302043, "raw_average_value_size": 875, "num_data_blocks": 25, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404493, "oldest_key_time": 1764404493, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 5040 microseconds, and 2424 cpu microseconds.
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058371) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 309886 bytes OK
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058391) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.060192) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.060208) EVENT_LOG_v1 {"time_micros": 1764404512060203, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.060226) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 561048, prev total WAL file size 561048, number of live WAL files 2.
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.060709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(302KB)], [93(13MB)]
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512060739, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 13966569, "oldest_snapshot_seqno": -1}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 8023 keys, 10180791 bytes, temperature: kUnknown
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512114665, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10180791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10129970, "index_size": 29653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208322, "raw_average_key_size": 25, "raw_value_size": 9989354, "raw_average_value_size": 1245, "num_data_blocks": 1154, "num_entries": 8023, "num_filter_entries": 8023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.114893) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10180791 bytes
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.116153) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.7 rd, 188.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(77.9) write-amplify(32.9) OK, records in: 8533, records dropped: 510 output_compression: NoCompression
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.116169) EVENT_LOG_v1 {"time_micros": 1764404512116162, "job": 58, "event": "compaction_finished", "compaction_time_micros": 53992, "compaction_time_cpu_micros": 23790, "output_level": 6, "num_output_files": 1, "total_output_size": 10180791, "num_input_records": 8533, "num_output_records": 8023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512116325, "job": 58, "event": "table_file_deletion", "file_number": 95}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512118498, "job": 58, "event": "table_file_deletion", "file_number": 93}
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.060668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.118582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.118589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.118591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.118593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:21:52.118595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539551 nova_compute[227360]: 2025-11-29 08:21:52.873 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404497.8709202, 83af56cb-4634-464b-a921-b228b72f2ea5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:52 np0005539551 nova_compute[227360]: 2025-11-29 08:21:52.874 227364 INFO nova.compute.manager [-] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:52 np0005539551 nova_compute[227360]: 2025-11-29 08:21:52.910 227364 DEBUG nova.compute.manager [None req-c61d7bb1-e2f9-4253-92cc-a5a36723d8a5 - - - - - -] [instance: 83af56cb-4634-464b-a921-b228b72f2ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.157 227364 DEBUG nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.158 227364 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.158 227364 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.158 227364 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.158 227364 DEBUG nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:54 np0005539551 nova_compute[227360]: 2025-11-29 08:21:54.159 227364 WARNING nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:21:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:55 np0005539551 nova_compute[227360]: 2025-11-29 08:21:55.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:56 np0005539551 nova_compute[227360]: 2025-11-29 08:21:56.102 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:56.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539551 podman[272638]: 2025-11-29 08:21:57.638714955 +0000 UTC m=+0.079122582 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:21:57 np0005539551 podman[272637]: 2025-11-29 08:21:57.647817421 +0000 UTC m=+0.088375222 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:21:57 np0005539551 podman[272636]: 2025-11-29 08:21:57.697767283 +0000 UTC m=+0.141280594 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:21:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:58.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.454 227364 INFO nova.compute.manager [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Swapping old allocation on dict_keys(['67c71d68-0dd7-4589-b775-189b4191a844']) held by migration 485f1b6d-5a0c-41df-a6c5-8c0287fb9ee9 for instance#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.466 227364 DEBUG nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.467 227364 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.467 227364 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.468 227364 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.468 227364 DEBUG nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.469 227364 WARNING nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.494 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.519 227364 DEBUG nova.scheduler.client.report [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Overwriting current allocation {'allocations': {'a73c606e-2495-4af4-b703-8d4b3001fdf5': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 79}}, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'consumer_generation': 1} on consumer 37bf3f0c-b49b-457b-81be-b4b31f32d872 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.714 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404504.7110345, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.715 227364 INFO nova.compute.manager [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.734 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.742 227364 DEBUG nova.compute.manager [None req-5df26909-a53b-4fef-af4f-2e3b437f0f29 - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.746 227364 DEBUG nova.compute.manager [None req-5df26909-a53b-4fef-af4f-2e3b437f0f29 - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.772 227364 INFO nova.compute.manager [None req-5df26909-a53b-4fef-af4f-2e3b437f0f29 - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:21:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:21:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:59 np0005539551 nova_compute[227360]: 2025-11-29 08:21:59.840 227364 INFO nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating port f435ee76-ed2f-4ad8-a9e1-bda955080b3e with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:22:00 np0005539551 nova_compute[227360]: 2025-11-29 08:22:00.204 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:00.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:01 np0005539551 nova_compute[227360]: 2025-11-29 08:22:01.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.468 227364 DEBUG nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.469 227364 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.469 227364 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.470 227364 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.470 227364 DEBUG nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:02 np0005539551 nova_compute[227360]: 2025-11-29 08:22:02.471 227364 WARNING nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:02.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:04.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:05 np0005539551 nova_compute[227360]: 2025-11-29 08:22:05.196 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:05 np0005539551 nova_compute[227360]: 2025-11-29 08:22:05.197 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:05 np0005539551 nova_compute[227360]: 2025-11-29 08:22:05.197 227364 DEBUG nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:05 np0005539551 nova_compute[227360]: 2025-11-29 08:22:05.206 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:06 np0005539551 nova_compute[227360]: 2025-11-29 08:22:06.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539551 nova_compute[227360]: 2025-11-29 08:22:07.156 227364 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:07 np0005539551 nova_compute[227360]: 2025-11-29 08:22:07.156 227364 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:07 np0005539551 nova_compute[227360]: 2025-11-29 08:22:07.157 227364 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.473 227364 DEBUG nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.529 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.530 227364 DEBUG nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.575 227364 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.576 227364 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.633 227364 DEBUG nova.storage.rbd_utils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rolling back rbd image(37bf3f0c-b49b-457b-81be-b4b31f32d872_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:22:08 np0005539551 nova_compute[227360]: 2025-11-29 08:22:08.750 227364 DEBUG nova.storage.rbd_utils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] removing snapshot(nova-resize) on rbd image(37bf3f0c-b49b-457b-81be-b4b31f32d872_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:22:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.631 227364 DEBUG nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start _get_guest_xml network_info=[{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.636 227364 WARNING nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.640 227364 DEBUG nova.virt.libvirt.host [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.641 227364 DEBUG nova.virt.libvirt.host [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.643 227364 DEBUG nova.virt.libvirt.host [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.644 227364 DEBUG nova.virt.libvirt.host [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.645 227364 DEBUG nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.645 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.645 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.645 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.646 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.646 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.646 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.646 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.646 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.647 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.647 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.647 227364 DEBUG nova.virt.hardware [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.647 227364 DEBUG nova.objects.instance [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:09 np0005539551 nova_compute[227360]: 2025-11-29 08:22:09.662 227364 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/862241062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.094 227364 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.136 227364 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.208 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.337 227364 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.338 227364 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.355 227364 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2621778276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.549 227364 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.551 227364 DEBUG nova.virt.libvirt.vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:54Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.551 227364 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.552 227364 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.554 227364 DEBUG nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <uuid>37bf3f0c-b49b-457b-81be-b4b31f32d872</uuid>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <name>instance-00000077</name>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-700901307</nova:name>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:22:09</nova:creationTime>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <nova:port uuid="f435ee76-ed2f-4ad8-a9e1-bda955080b3e">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="serial">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="uuid">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a0:40:0c"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <target dev="tapf435ee76-ed"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/console.log" append="off"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:22:10 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:22:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:22:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:22:10 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.555 227364 DEBUG nova.compute.manager [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Preparing to wait for external event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.555 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.556 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.556 227364 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.556 227364 DEBUG nova.virt.libvirt.vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:54Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.557 227364 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.557 227364 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.557 227364 DEBUG os_vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.558 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.558 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.559 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.562 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.562 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf435ee76-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.563 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf435ee76-ed, col_values=(('external_ids', {'iface-id': 'f435ee76-ed2f-4ad8-a9e1-bda955080b3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:40:0c', 'vm-uuid': '37bf3f0c-b49b-457b-81be-b4b31f32d872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.5671] manager: (tapf435ee76-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.570 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.572 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.573 227364 INFO os_vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')#033[00m
Nov 29 03:22:10 np0005539551 kernel: tapf435ee76-ed: entered promiscuous mode
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.6367] manager: (tapf435ee76-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 03:22:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:10Z|00504|binding|INFO|Claiming lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e for this chassis.
Nov 29 03:22:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:10Z|00505|binding|INFO|f435ee76-ed2f-4ad8-a9e1-bda955080b3e: Claiming fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.642 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.6549] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.6555] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.658 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.659 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 bound to our chassis
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.660 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:22:10 np0005539551 systemd-machined[190756]: New machine qemu-55-instance-00000077.
Nov 29 03:22:10 np0005539551 systemd-udevd[272837]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.670 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f38a3b9-c350-4b85-ad55-0886c6479651]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.671 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b381cec-51 in ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.672 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b381cec-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.672 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b60bb3-9fba-49b1-b735-431f75ce6fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.673 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5ad797-d8a2-4872-907f-5e5b35d4ec6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.682 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[fa727fa4-c3a7-43eb-98fb-aed91bd77498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.6877] device (tapf435ee76-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.6888] device (tapf435ee76-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.706 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[20dd6d3e-573c-464a-aab0-f9e4aebf33cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 systemd[1]: Started Virtual Machine qemu-55-instance-00000077.
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.733 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0fad4454-71d2-476b-8261-ad4ac3d4f9b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 systemd-udevd[272841]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.7432] manager: (tap2b381cec-50): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.742 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6ff4d4-6274-4c9f-928f-4c974d408708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.779 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c59c04c2-1d4b-4bff-b7c6-527da2f26e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.782 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcd2ce8-789b-4b61-95a9-85a213379f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.8085] device (tap2b381cec-50): carrier: link connected
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.816 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[21592257-1497-4a36-b7ae-5fda60a52303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.854 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04f7ece2-5d1b-4142-914b-7b882c757cbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757768, 'reachable_time': 29260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272869, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:10Z|00506|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e ovn-installed in OVS
Nov 29 03:22:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:10Z|00507|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e up in Southbound
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.863 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.870 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3daefe-db51-4c16-aea4-8a6e7bf3039d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:c0a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 757768, 'tstamp': 757768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272870, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.885 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a426e9-ae76-4520-8bad-2097b13db5d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757768, 'reachable_time': 29260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272871, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.914 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4a40785d-421f-4a55-b956-8e58f7ed71b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.974 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b40f98-55c4-4d5a-954b-ee9a5159d4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.976 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.976 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.976 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b381cec-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.978 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 NetworkManager[48922]: <info>  [1764404530.9787] manager: (tap2b381cec-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 03:22:10 np0005539551 kernel: tap2b381cec-50: entered promiscuous mode
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.980 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.981 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b381cec-50, col_values=(('external_ids', {'iface-id': '7127038e-90ca-4039-8404-4b8a2152df71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.982 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:10Z|00508|binding|INFO|Releasing lport 7127038e-90ca-4039-8404-4b8a2152df71 from this chassis (sb_readonly=0)
Nov 29 03:22:10 np0005539551 nova_compute[227360]: 2025-11-29 08:22:10.983 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.984 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.985 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[46bd66e5-431d-40f3-b854-7f92e7cc7738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.985 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:22:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:10.986 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'env', 'PROCESS_TAG=haproxy-2b381cec-57a8-4697-a273-a320681301f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b381cec-57a8-4697-a273-a320681301f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.004 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.117 227364 DEBUG nova.compute.manager [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.117 227364 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.118 227364 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.118 227364 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.118 227364 DEBUG nova.compute.manager [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Processing event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.202 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404531.2018237, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.202 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Started (Lifecycle Event)
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.204 227364 DEBUG nova.compute.manager [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.211 227364 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance running successfully.
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.213 227364 DEBUG nova.virt.libvirt.driver [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.219 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.225 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.250 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.251 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404531.2030108, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.251 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Paused (Lifecycle Event)
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.313 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.317 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404531.2066605, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.317 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Resumed (Lifecycle Event)
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.333 227364 INFO nova.compute.manager [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance to original state: 'active'
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.348 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.353 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:11 np0005539551 podman[272943]: 2025-11-29 08:22:11.374214653 +0000 UTC m=+0.048793181 container create 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:22:11 np0005539551 nova_compute[227360]: 2025-11-29 08:22:11.397 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:22:11 np0005539551 systemd[1]: Started libpod-conmon-5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65.scope.
Nov 29 03:22:11 np0005539551 podman[272943]: 2025-11-29 08:22:11.348020604 +0000 UTC m=+0.022599162 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:11 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:22:11 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b950b53d31ff8e2fbbca1fa466e3291685d167e52cf35d3b1a336da833fdb36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:11 np0005539551 podman[272943]: 2025-11-29 08:22:11.473577241 +0000 UTC m=+0.148155769 container init 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:22:11 np0005539551 podman[272943]: 2025-11-29 08:22:11.479929133 +0000 UTC m=+0.154507661 container start 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:22:11 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [NOTICE]   (272963) : New worker (272965) forked
Nov 29 03:22:11 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [NOTICE]   (272963) : Loading success.
Nov 29 03:22:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:12 np0005539551 nova_compute[227360]: 2025-11-29 08:22:12.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:12.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.248 227364 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.248 227364 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.249 227364 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.249 227364 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.249 227364 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:13 np0005539551 nova_compute[227360]: 2025-11-29 08:22:13.249 227364 WARNING nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:22:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 03:22:15 np0005539551 nova_compute[227360]: 2025-11-29 08:22:15.568 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:16 np0005539551 nova_compute[227360]: 2025-11-29 08:22:16.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539551 nova_compute[227360]: 2025-11-29 08:22:16.264 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:16.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 03:22:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:17.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:18.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:19.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:19.872 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:19.873 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:19.873 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:20 np0005539551 nova_compute[227360]: 2025-11-29 08:22:20.570 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:20.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:21 np0005539551 nova_compute[227360]: 2025-11-29 08:22:21.115 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:22.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:24Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:22:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:24.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:25 np0005539551 nova_compute[227360]: 2025-11-29 08:22:25.574 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:26 np0005539551 nova_compute[227360]: 2025-11-29 08:22:26.118 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:27.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:27.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:28 np0005539551 podman[272977]: 2025-11-29 08:22:28.635365187 +0000 UTC m=+0.075976426 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:28 np0005539551 podman[272976]: 2025-11-29 08:22:28.64839915 +0000 UTC m=+0.092683478 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:22:28 np0005539551 podman[272975]: 2025-11-29 08:22:28.667907938 +0000 UTC m=+0.112478584 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:22:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:29.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 03:22:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:30 np0005539551 nova_compute[227360]: 2025-11-29 08:22:30.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:30 np0005539551 nova_compute[227360]: 2025-11-29 08:22:30.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:30 np0005539551 nova_compute[227360]: 2025-11-29 08:22:30.576 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.109 227364 INFO nova.compute.manager [None req-e69b471e-e8c8-4731-ae78-3ea15c3789c3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Get console output#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.114 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.576 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.576 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.576 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.577 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:31 np0005539551 nova_compute[227360]: 2025-11-29 08:22:31.832 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:31.833 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:31.835 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.921 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.921 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.921 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.922 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.922 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.923 227364 INFO nova.compute.manager [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Terminating instance#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.923 227364 DEBUG nova.compute.manager [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:32 np0005539551 kernel: tapf435ee76-ed (unregistering): left promiscuous mode
Nov 29 03:22:32 np0005539551 NetworkManager[48922]: <info>  [1764404552.9696] device (tapf435ee76-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:32Z|00509|binding|INFO|Releasing lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e from this chassis (sb_readonly=0)
Nov 29 03:22:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:32Z|00510|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e down in Southbound
Nov 29 03:22:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:22:32Z|00511|binding|INFO|Removing iface tapf435ee76-ed ovn-installed in OVS
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.976 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:32.987 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:32.988 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 unbound from our chassis#033[00m
Nov 29 03:22:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:32.989 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b381cec-57a8-4697-a273-a320681301f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:32.990 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9d02a9e4-de23-4e9d-858d-608c87a01490]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:32.991 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace which is not needed anymore#033[00m
Nov 29 03:22:32 np0005539551 nova_compute[227360]: 2025-11-29 08:22:32.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:33.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:33 np0005539551 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 03:22:33 np0005539551 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000077.scope: Consumed 14.244s CPU time.
Nov 29 03:22:33 np0005539551 systemd-machined[190756]: Machine qemu-55-instance-00000077 terminated.
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [NOTICE]   (272963) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [NOTICE]   (272963) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [WARNING]  (272963) : Exiting Master process...
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [WARNING]  (272963) : Exiting Master process...
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [ALERT]    (272963) : Current worker (272965) exited with code 143 (Terminated)
Nov 29 03:22:33 np0005539551 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[272959]: [WARNING]  (272963) : All workers exited. Exiting... (0)
Nov 29 03:22:33 np0005539551 systemd[1]: libpod-5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65.scope: Deactivated successfully.
Nov 29 03:22:33 np0005539551 podman[273062]: 2025-11-29 08:22:33.117475359 +0000 UTC m=+0.042651885 container died 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:22:33 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:33 np0005539551 systemd[1]: var-lib-containers-storage-overlay-6b950b53d31ff8e2fbbca1fa466e3291685d167e52cf35d3b1a336da833fdb36-merged.mount: Deactivated successfully.
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.156 227364 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance destroyed successfully.#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.157 227364 DEBUG nova.objects.instance [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:33 np0005539551 podman[273062]: 2025-11-29 08:22:33.158946681 +0000 UTC m=+0.084123207 container cleanup 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:22:33 np0005539551 systemd[1]: libpod-conmon-5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65.scope: Deactivated successfully.
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.188 227364 DEBUG nova.virt.libvirt.vif [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:11Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.188 227364 DEBUG nova.network.os_vif_util [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.189 227364 DEBUG nova.network.os_vif_util [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.189 227364 DEBUG os_vif [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.191 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.191 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf435ee76-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.192 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.196 227364 INFO os_vif [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')#033[00m
Nov 29 03:22:33 np0005539551 podman[273100]: 2025-11-29 08:22:33.227169077 +0000 UTC m=+0.045306937 container remove 5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.232 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fa37a0ce-8a5e-432d-99f0-6693c23b946e]: (4, ('Sat Nov 29 08:22:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65)\n5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65\nSat Nov 29 08:22:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65)\n5304585e10b5cfd21d24a2d792df6cd86a6cf8f198b74dcb20885d7442995c65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.235 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c40c3b25-64e0-4b2c-957d-bc0f6104059a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.236 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 kernel: tap2b381cec-50: left promiscuous mode
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.253 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.255 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1c26f0cb-3f28-48dc-a18c-228fe7303009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.273 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e03c7-5f67-4e3c-a4ee-fc76f0ebd200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.274 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3433b4ad-c53c-4f8d-8446-a561ec7cbe6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.290 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd11a56-5aab-48a9-8836-9c2eb71655a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 757760, 'reachable_time': 29332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273133, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.293 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:33 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:33.293 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[574e6181-5784-4713-8b6b-64c9432e3f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539551 systemd[1]: run-netns-ovnmeta\x2d2b381cec\x2d57a8\x2d4697\x2da273\x2da320681301f8.mount: Deactivated successfully.
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.571 227364 INFO nova.virt.libvirt.driver [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Deleting instance files /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872_del#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.572 227364 INFO nova.virt.libvirt.driver [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Deletion of /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872_del complete#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.653 227364 INFO nova.compute.manager [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.654 227364 DEBUG oslo.service.loopingcall [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.654 227364 DEBUG nova.compute.manager [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.655 227364 DEBUG nova.network.neutron [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.766 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.804 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:33 np0005539551 nova_compute[227360]: 2025-11-29 08:22:33.805 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:22:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:22:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.037 227364 DEBUG nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.037 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.038 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.038 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.039 227364 DEBUG nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.039 227364 DEBUG nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.040 227364 DEBUG nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.040 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.041 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.041 227364 DEBUG oslo_concurrency.lockutils [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.042 227364 DEBUG nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.042 227364 WARNING nova.compute.manager [req-9dd35bdb-2c46-4891-8efd-d6559d767145 req-6ad49477-c693-4263-9bf3-1255cf956702 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.223 227364 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.223 227364 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.224 227364 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.224 227364 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.225 227364 DEBUG nova.network.neutron [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.575 227364 DEBUG nova.network.neutron [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.591 227364 INFO nova.compute.manager [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.637 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.638 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539551 nova_compute[227360]: 2025-11-29 08:22:34.711 227364 DEBUG oslo_concurrency.processutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/573464170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.147 227364 DEBUG oslo_concurrency.processutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.153 227364 DEBUG nova.compute.provider_tree [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.172 227364 DEBUG nova.scheduler.client.report [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.195 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.220 227364 INFO nova.scheduler.client.report [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance 37bf3f0c-b49b-457b-81be-b4b31f32d872#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.309 227364 DEBUG oslo_concurrency.lockutils [None req-9d84c536-7a56-499a-939b-3515d0156874 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.434 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.434 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.464 227364 DEBUG nova.network.neutron [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.464 227364 DEBUG nova.network.neutron [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.487 227364 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1448182720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:35 np0005539551 nova_compute[227360]: 2025-11-29 08:22:35.874 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.026 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.027 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4448MB free_disk=20.81930160522461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.027 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.027 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.076 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.076 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.096 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.123 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.143 227364 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-deleted-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/645418670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.549 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.557 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.574 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.610 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:22:36 np0005539551 nova_compute[227360]: 2025-11-29 08:22:36.610 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:37.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 03:22:37 np0005539551 nova_compute[227360]: 2025-11-29 08:22:37.611 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:37 np0005539551 nova_compute[227360]: 2025-11-29 08:22:37.611 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:37.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:38 np0005539551 nova_compute[227360]: 2025-11-29 08:22:38.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539551 nova_compute[227360]: 2025-11-29 08:22:38.507 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539551 nova_compute[227360]: 2025-11-29 08:22:38.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:22:38.837 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:41 np0005539551 nova_compute[227360]: 2025-11-29 08:22:41.123 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:43 np0005539551 nova_compute[227360]: 2025-11-29 08:22:43.197 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:43.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539551 nova_compute[227360]: 2025-11-29 08:22:45.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:45.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:46 np0005539551 nova_compute[227360]: 2025-11-29 08:22:46.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:46 np0005539551 nova_compute[227360]: 2025-11-29 08:22:46.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:46 np0005539551 nova_compute[227360]: 2025-11-29 08:22:46.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:22:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:47.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:48 np0005539551 nova_compute[227360]: 2025-11-29 08:22:48.155 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404553.1543405, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:48 np0005539551 nova_compute[227360]: 2025-11-29 08:22:48.156 227364 INFO nova.compute.manager [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:48 np0005539551 nova_compute[227360]: 2025-11-29 08:22:48.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:48 np0005539551 nova_compute[227360]: 2025-11-29 08:22:48.285 227364 DEBUG nova.compute.manager [None req-9eb4c5b2-84a0-4972-8b70-849fbfcec3c1 - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:49.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:49.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:51.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:51 np0005539551 nova_compute[227360]: 2025-11-29 08:22:51.197 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:51.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:53 np0005539551 nova_compute[227360]: 2025-11-29 08:22:53.203 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539551 nova_compute[227360]: 2025-11-29 08:22:55.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:56 np0005539551 nova_compute[227360]: 2025-11-29 08:22:56.199 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:57.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:22:57 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:57.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:58 np0005539551 nova_compute[227360]: 2025-11-29 08:22:58.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:59.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:59 np0005539551 podman[273337]: 2025-11-29 08:22:59.609600411 +0000 UTC m=+0.055329118 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:22:59 np0005539551 podman[273336]: 2025-11-29 08:22:59.616504718 +0000 UTC m=+0.059829510 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:59 np0005539551 podman[273335]: 2025-11-29 08:22:59.643332063 +0000 UTC m=+0.090818577 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:22:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:22:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.194 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.194 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.231 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.372 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.373 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.380 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.380 227364 INFO nova.compute.claims [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.479 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2977413673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.931 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.938 227364 DEBUG nova.compute.provider_tree [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.952 227364 DEBUG nova.scheduler.client.report [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.977 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:00 np0005539551 nova_compute[227360]: 2025-11-29 08:23:00.978 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.033 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.034 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:23:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:01.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.063 227364 INFO nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.082 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.196 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.199 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.200 227364 INFO nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Creating image(s)#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.229 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.262 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.293 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.299 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.374 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.375 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.376 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.376 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.404 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.408 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 27bb49e9-1b5b-452b-89e4-21008913f536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.438 227364 DEBUG nova.policy [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.660 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 27bb49e9-1b5b-452b-89e4-21008913f536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.726 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.831 227364 DEBUG nova.objects.instance [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid 27bb49e9-1b5b-452b-89e4-21008913f536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:01.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.874 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.874 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Ensure instance console log exists: /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.875 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.875 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:01 np0005539551 nova_compute[227360]: 2025-11-29 08:23:01.875 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:02 np0005539551 nova_compute[227360]: 2025-11-29 08:23:02.532 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Successfully created port: 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:23:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:03.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.704 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Successfully updated port: 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.728 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.728 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.728 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539551 nova_compute[227360]: 2025-11-29 08:23:03.923 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:23:04 np0005539551 nova_compute[227360]: 2025-11-29 08:23:04.571 227364 DEBUG nova.compute.manager [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:04 np0005539551 nova_compute[227360]: 2025-11-29 08:23:04.572 227364 DEBUG nova.compute.manager [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing instance network info cache due to event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:23:04 np0005539551 nova_compute[227360]: 2025-11-29 08:23:04.572 227364 DEBUG oslo_concurrency.lockutils [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.257 227364 DEBUG nova.network.neutron [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.291 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.291 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance network_info: |[{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.291 227364 DEBUG oslo_concurrency.lockutils [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.292 227364 DEBUG nova.network.neutron [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.298 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Start _get_guest_xml network_info=[{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.303 227364 WARNING nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.309 227364 DEBUG nova.virt.libvirt.host [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.310 227364 DEBUG nova.virt.libvirt.host [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.320 227364 DEBUG nova.virt.libvirt.host [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.321 227364 DEBUG nova.virt.libvirt.host [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.323 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.324 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.325 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.326 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.326 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.327 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.327 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.328 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.329 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.329 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.329 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.330 227364 DEBUG nova.virt.hardware [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.334 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2406472335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.774 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.805 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:05 np0005539551 nova_compute[227360]: 2025-11-29 08:23:05.809 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1703217722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.256 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.258 227364 DEBUG nova.virt.libvirt.vif [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:01Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.259 227364 DEBUG nova.network.os_vif_util [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.260 227364 DEBUG nova.network.os_vif_util [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.262 227364 DEBUG nova.objects.instance [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27bb49e9-1b5b-452b-89e4-21008913f536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.390 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <uuid>27bb49e9-1b5b-452b-89e4-21008913f536</uuid>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <name>instance-0000007e</name>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-265826808</nova:name>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:23:05</nova:creationTime>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <nova:port uuid="1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="serial">27bb49e9-1b5b-452b-89e4-21008913f536</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="uuid">27bb49e9-1b5b-452b-89e4-21008913f536</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/27bb49e9-1b5b-452b-89e4-21008913f536_disk">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/27bb49e9-1b5b-452b-89e4-21008913f536_disk.config">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:32:69:95"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <target dev="tap1d2b1b3c-a9"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/console.log" append="off"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:23:06 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:23:06 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:23:06 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:23:06 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.391 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Preparing to wait for external event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.392 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.393 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.394 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.395 227364 DEBUG nova.virt.libvirt.vif [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:01Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.395 227364 DEBUG nova.network.os_vif_util [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.396 227364 DEBUG nova.network.os_vif_util [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.397 227364 DEBUG os_vif [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.398 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.399 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.399 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.404 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d2b1b3c-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.405 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d2b1b3c-a9, col_values=(('external_ids', {'iface-id': '1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:69:95', 'vm-uuid': '27bb49e9-1b5b-452b-89e4-21008913f536'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.407 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539551 NetworkManager[48922]: <info>  [1764404586.4085] manager: (tap1d2b1b3c-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.416 227364 INFO os_vif [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9')#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.552 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.553 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.553 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:32:69:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.554 227364 INFO nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Using config drive#033[00m
Nov 29 03:23:06 np0005539551 nova_compute[227360]: 2025-11-29 08:23:06.582 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.338 227364 INFO nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Creating config drive at /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.342 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbw9ud561 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.391 227364 DEBUG nova.network.neutron [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updated VIF entry in instance network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.392 227364 DEBUG nova.network.neutron [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.466 227364 DEBUG oslo_concurrency.lockutils [req-0a3eeec2-4866-4f55-9bda-319e3d3db429 req-3f682f32-cde0-4e1e-a7fc-3a917c347164 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.474 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbw9ud561" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.505 227364 DEBUG nova.storage.rbd_utils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 27bb49e9-1b5b-452b-89e4-21008913f536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.510 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config 27bb49e9-1b5b-452b-89e4-21008913f536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.664 227364 DEBUG oslo_concurrency.processutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config 27bb49e9-1b5b-452b-89e4-21008913f536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.666 227364 INFO nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deleting local config drive /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/disk.config because it was imported into RBD.#033[00m
Nov 29 03:23:07 np0005539551 kernel: tap1d2b1b3c-a9: entered promiscuous mode
Nov 29 03:23:07 np0005539551 NetworkManager[48922]: <info>  [1764404587.7137] manager: (tap1d2b1b3c-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539551 systemd-udevd[273764]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:07Z|00512|binding|INFO|Claiming lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for this chassis.
Nov 29 03:23:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:07Z|00513|binding|INFO|1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4: Claiming fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539551 NetworkManager[48922]: <info>  [1764404587.7894] device (tap1d2b1b3c-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:07 np0005539551 NetworkManager[48922]: <info>  [1764404587.7900] device (tap1d2b1b3c-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.790 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.791 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 bound to our chassis#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.794 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36ca7446-a7cc-4230-a5a5-4c818b881403#033[00m
Nov 29 03:23:07 np0005539551 systemd-machined[190756]: New machine qemu-56-instance-0000007e.
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.804 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fae95825-d0ca-4303-90af-98263b59a5bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.805 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36ca7446-a1 in ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.807 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36ca7446-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.807 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a572c9a1-044a-492c-894d-ad80491f7abb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.808 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6692dda7-5975-4f92-91f3-99437ede554e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.821 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[db5c6509-345c-4324-b0ec-43e9812b9ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 systemd[1]: Started Virtual Machine qemu-56-instance-0000007e.
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.845 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c76b2a09-4cf4-45a5-a4a3-e617da6b2506]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:07Z|00514|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 ovn-installed in OVS
Nov 29 03:23:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:07Z|00515|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 up in Southbound
Nov 29 03:23:07 np0005539551 nova_compute[227360]: 2025-11-29 08:23:07.850 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.871 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[760fc908-b32d-497f-884c-738271d00f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.876 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1786de-dc23-49a3-ab23-cf28f22c9836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 NetworkManager[48922]: <info>  [1764404587.8776] manager: (tap36ca7446-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 03:23:07 np0005539551 systemd-udevd[273768]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.904 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[cf309765-fba7-43d5-9d03-f1cee6bec300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.907 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4d1c9b-576b-4218-a896-a71b83fd209e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 NetworkManager[48922]: <info>  [1764404587.9289] device (tap36ca7446-a0): carrier: link connected
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.932 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[fe85cb3c-370a-470c-84c1-3c3e3f682a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.947 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[470b5bde-c889-4a02-9774-9b6fa3820d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ca7446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:32:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763480, 'reachable_time': 38263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273800, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.961 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[df94350d-dc51-400d-af95-bb36443764ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:3266'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763480, 'tstamp': 763480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273801, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:07.977 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[443be892-53fa-4de0-bb8c-83520bde0db2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ca7446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:32:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763480, 'reachable_time': 38263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273802, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.019 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1933069-f5bc-4376-9360-98f7edaa3f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.082 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f423848f-ed2d-4bfa-86e3-6ac97de0bae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.084 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ca7446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.084 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.084 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36ca7446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:08 np0005539551 kernel: tap36ca7446-a0: entered promiscuous mode
Nov 29 03:23:08 np0005539551 NetworkManager[48922]: <info>  [1764404588.0869] manager: (tap36ca7446-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 03:23:08 np0005539551 nova_compute[227360]: 2025-11-29 08:23:08.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.088 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36ca7446-a0, col_values=(('external_ids', {'iface-id': 'f0d0f672-ab06-43e3-bb90-8353b7804006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:08Z|00516|binding|INFO|Releasing lport f0d0f672-ab06-43e3-bb90-8353b7804006 from this chassis (sb_readonly=0)
Nov 29 03:23:08 np0005539551 nova_compute[227360]: 2025-11-29 08:23:08.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.104 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.105 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89dd3ffa-8b78-4ae9-9878-7f2e4305506e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.106 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-36ca7446-a7cc-4230-a5a5-4c818b881403
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 36ca7446-a7cc-4230-a5a5-4c818b881403
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:08.107 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'env', 'PROCESS_TAG=haproxy-36ca7446-a7cc-4230-a5a5-4c818b881403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36ca7446-a7cc-4230-a5a5-4c818b881403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:23:08 np0005539551 podman[273834]: 2025-11-29 08:23:08.515161427 +0000 UTC m=+0.084495766 container create 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:23:08 np0005539551 podman[273834]: 2025-11-29 08:23:08.455325659 +0000 UTC m=+0.024660008 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:08 np0005539551 systemd[1]: Started libpod-conmon-37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590.scope.
Nov 29 03:23:08 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:23:08 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e9ad8b4c9030a239071290ef7f456d453e664d2b0e9cf73017cc8a6caea6e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:08 np0005539551 podman[273834]: 2025-11-29 08:23:08.610226969 +0000 UTC m=+0.179561318 container init 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:23:08 np0005539551 podman[273834]: 2025-11-29 08:23:08.615359878 +0000 UTC m=+0.184694207 container start 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:23:08 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [NOTICE]   (273853) : New worker (273855) forked
Nov 29 03:23:08 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [NOTICE]   (273853) : Loading success.
Nov 29 03:23:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.362 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404589.362186, 27bb49e9-1b5b-452b-89e4-21008913f536 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.363 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.400 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.404 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404589.3623402, 27bb49e9-1b5b-452b-89e4-21008913f536 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.404 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.424 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.427 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:09 np0005539551 nova_compute[227360]: 2025-11-29 08:23:09.443 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:23:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.969 227364 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.970 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.970 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.970 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.970 227364 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Processing event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.970 227364 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.971 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.971 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.971 227364 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.971 227364 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.971 227364 WARNING nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.972 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.975 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404590.9748223, 27bb49e9-1b5b-452b-89e4-21008913f536 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.975 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.976 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.979 227364 INFO nova.virt.libvirt.driver [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance spawned successfully.#033[00m
Nov 29 03:23:10 np0005539551 nova_compute[227360]: 2025-11-29 08:23:10.979 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.004 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.008 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.012 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.012 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.013 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.013 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.013 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.014 227364 DEBUG nova.virt.libvirt.driver [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.062 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:23:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:11.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.102 227364 INFO nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Took 9.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.102 227364 DEBUG nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.172 227364 INFO nova.compute.manager [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Took 10.84 seconds to build instance.#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.190 227364 DEBUG oslo_concurrency.lockutils [None req-c6ef10f9-866a-4ff2-8bdc-973561f01b7a fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.204 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539551 nova_compute[227360]: 2025-11-29 08:23:11.408 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:13.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:13 np0005539551 nova_compute[227360]: 2025-11-29 08:23:13.696 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:13 np0005539551 NetworkManager[48922]: <info>  [1764404593.6986] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 03:23:13 np0005539551 NetworkManager[48922]: <info>  [1764404593.6998] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 03:23:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:13.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:13 np0005539551 nova_compute[227360]: 2025-11-29 08:23:13.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:13Z|00517|binding|INFO|Releasing lport f0d0f672-ab06-43e3-bb90-8353b7804006 from this chassis (sb_readonly=0)
Nov 29 03:23:13 np0005539551 nova_compute[227360]: 2025-11-29 08:23:13.909 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:14 np0005539551 nova_compute[227360]: 2025-11-29 08:23:14.010 227364 DEBUG nova.compute.manager [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:14 np0005539551 nova_compute[227360]: 2025-11-29 08:23:14.010 227364 DEBUG nova.compute.manager [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing instance network info cache due to event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:23:14 np0005539551 nova_compute[227360]: 2025-11-29 08:23:14.010 227364 DEBUG oslo_concurrency.lockutils [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:14 np0005539551 nova_compute[227360]: 2025-11-29 08:23:14.011 227364 DEBUG oslo_concurrency.lockutils [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:14 np0005539551 nova_compute[227360]: 2025-11-29 08:23:14.011 227364 DEBUG nova.network.neutron [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:23:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539551 nova_compute[227360]: 2025-11-29 08:23:15.333 227364 DEBUG nova.network.neutron [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updated VIF entry in instance network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:15 np0005539551 nova_compute[227360]: 2025-11-29 08:23:15.334 227364 DEBUG nova.network.neutron [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:15 np0005539551 nova_compute[227360]: 2025-11-29 08:23:15.353 227364 DEBUG oslo_concurrency.lockutils [req-2a464285-f279-427e-a4e9-a263c4908fe9 req-f5327f3e-852b-4ebb-82e8-a3ec2063b5e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:15.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:16 np0005539551 nova_compute[227360]: 2025-11-29 08:23:16.206 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539551 nova_compute[227360]: 2025-11-29 08:23:16.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:17.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:19.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:19.873 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:19.874 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:19.875 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:21.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:21 np0005539551 nova_compute[227360]: 2025-11-29 08:23:21.209 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539551 nova_compute[227360]: 2025-11-29 08:23:21.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:23.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:23Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:23Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:23.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:25.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:25Z|00518|binding|INFO|Releasing lport f0d0f672-ab06-43e3-bb90-8353b7804006 from this chassis (sb_readonly=0)
Nov 29 03:23:25 np0005539551 nova_compute[227360]: 2025-11-29 08:23:25.698 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:26 np0005539551 nova_compute[227360]: 2025-11-29 08:23:26.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:26 np0005539551 nova_compute[227360]: 2025-11-29 08:23:26.413 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:27.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:29 np0005539551 nova_compute[227360]: 2025-11-29 08:23:29.282 227364 INFO nova.compute.manager [None req-2c6af860-dd10-44c6-b17d-93d4465a1a6c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Get console output#033[00m
Nov 29 03:23:29 np0005539551 nova_compute[227360]: 2025-11-29 08:23:29.290 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:23:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:30 np0005539551 nova_compute[227360]: 2025-11-29 08:23:30.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:30 np0005539551 podman[273910]: 2025-11-29 08:23:30.614314001 +0000 UTC m=+0.052945774 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:23:30 np0005539551 podman[273909]: 2025-11-29 08:23:30.647483788 +0000 UTC m=+0.078769652 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:23:30 np0005539551 podman[273908]: 2025-11-29 08:23:30.666198204 +0000 UTC m=+0.110876530 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:23:30 np0005539551 nova_compute[227360]: 2025-11-29 08:23:30.958 227364 INFO nova.compute.manager [None req-bc385d57-5c69-42ae-b77c-8f9d442ebb77 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Get console output#033[00m
Nov 29 03:23:30 np0005539551 nova_compute[227360]: 2025-11-29 08:23:30.962 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:23:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:31.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:31 np0005539551 nova_compute[227360]: 2025-11-29 08:23:31.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:31 np0005539551 nova_compute[227360]: 2025-11-29 08:23:31.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:31.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:31.980 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:31.980 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.012 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.754 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.754 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.754 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:23:32 np0005539551 nova_compute[227360]: 2025-11-29 08:23:32.755 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27bb49e9-1b5b-452b-89e4-21008913f536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:33.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:33.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539551 nova_compute[227360]: 2025-11-29 08:23:34.951 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:34 np0005539551 nova_compute[227360]: 2025-11-29 08:23:34.977 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:34 np0005539551 nova_compute[227360]: 2025-11-29 08:23:34.978 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:23:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:35.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:35 np0005539551 nova_compute[227360]: 2025-11-29 08:23:35.358 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Check if temp file /var/lib/nova/instances/tmpxu9nn7ze exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 03:23:35 np0005539551 nova_compute[227360]: 2025-11-29 08:23:35.359 227364 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 03:23:35 np0005539551 nova_compute[227360]: 2025-11-29 08:23:35.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:35.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:36 np0005539551 nova_compute[227360]: 2025-11-29 08:23:36.220 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539551 nova_compute[227360]: 2025-11-29 08:23:36.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:36 np0005539551 nova_compute[227360]: 2025-11-29 08:23:36.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.433 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.434 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:37 np0005539551 nova_compute[227360]: 2025-11-29 08:23:37.856 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2374984749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.003 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.079 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.080 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.241 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.243 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4271MB free_disk=20.830604553222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.243 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.244 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.288 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating resource usage from migration 1b0fcf33-c641-4e86-ad8d-2103b7f82fe6#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.339 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration 1b0fcf33-c641-4e86-ad8d-2103b7f82fe6 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.339 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.339 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.407 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.757 227364 DEBUG nova.compute.manager [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.758 227364 DEBUG oslo_concurrency.lockutils [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.758 227364 DEBUG oslo_concurrency.lockutils [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.759 227364 DEBUG oslo_concurrency.lockutils [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.759 227364 DEBUG nova.compute.manager [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.759 227364 DEBUG nova.compute.manager [req-e3810e1b-b16c-485e-a669-6c9707553d45 req-9a126490-f5c1-4174-97df-64778e8f821f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1791215095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.839 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.846 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.866 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.893 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:23:38 np0005539551 nova_compute[227360]: 2025-11-29 08:23:38.894 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.381 227364 INFO nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Took 3.34 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.382 227364 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.407 227364 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(1b0fcf33-c641-4e86-ad8d-2103b7f82fe6),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.412 227364 DEBUG nova.objects.instance [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'migration_context' on Instance uuid 27bb49e9-1b5b-452b-89e4-21008913f536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.415 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.418 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.419 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.438 227364 DEBUG nova.virt.libvirt.vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:11Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.439 227364 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.440 227364 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.440 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 03:23:39 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:32:69:95"/>
Nov 29 03:23:39 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:23:39 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:39 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:23:39 np0005539551 nova_compute[227360]:  <target dev="tap1d2b1b3c-a9"/>
Nov 29 03:23:39 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:23:39 np0005539551 nova_compute[227360]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.441 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.895 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.921 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 03:23:39 np0005539551 nova_compute[227360]: 2025-11-29 08:23:39.921 227364 INFO nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 03:23:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.000 227364 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.502 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.503 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.838 227364 DEBUG nova.compute.manager [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.838 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.839 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.839 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.839 227364 DEBUG nova.compute.manager [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.839 227364 WARNING nova.compute.manager [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.840 227364 DEBUG nova.compute.manager [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.840 227364 DEBUG nova.compute.manager [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing instance network info cache due to event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.840 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.840 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.840 227364 DEBUG nova.network.neutron [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.966 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404620.965674, 27bb49e9-1b5b-452b-89e4-21008913f536 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.966 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.987 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:40 np0005539551 nova_compute[227360]: 2025-11-29 08:23:40.992 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.006 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.006 227364 DEBUG nova.virt.libvirt.migration [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.009 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 03:23:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.223 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 kernel: tap1d2b1b3c-a9 (unregistering): left promiscuous mode
Nov 29 03:23:41 np0005539551 NetworkManager[48922]: <info>  [1764404621.3963] device (tap1d2b1b3c-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00519|binding|INFO|Releasing lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 from this chassis (sb_readonly=0)
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00520|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 down in Southbound
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00521|binding|INFO|Removing iface tap1d2b1b3c-a9 ovn-installed in OVS
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.414 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '479f969f-dbf7-4938-8979-b8532eb113f6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.415 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 unbound from our chassis#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.416 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ca7446-a7cc-4230-a5a5-4c818b881403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.417 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[951f7caa-0e78-4cc1-a632-bd4b348ceffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.417 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 namespace which is not needed anymore#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.424 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 29 03:23:41 np0005539551 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007e.scope: Consumed 15.293s CPU time.
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.467 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 systemd-machined[190756]: Machine qemu-56-instance-0000007e terminated.
Nov 29 03:23:41 np0005539551 virtqemud[226785]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/27bb49e9-1b5b-452b-89e4-21008913f536_disk: No such file or directory
Nov 29 03:23:41 np0005539551 virtqemud[226785]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/27bb49e9-1b5b-452b-89e4-21008913f536_disk: No such file or directory
Nov 29 03:23:41 np0005539551 kernel: tap1d2b1b3c-a9: entered promiscuous mode
Nov 29 03:23:41 np0005539551 NetworkManager[48922]: <info>  [1764404621.5279] manager: (tap1d2b1b3c-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.528 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00522|binding|INFO|Claiming lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for this chassis.
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00523|binding|INFO|1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4: Claiming fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:41 np0005539551 kernel: tap1d2b1b3c-a9 (unregistering): left promiscuous mode
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [NOTICE]   (273853) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [NOTICE]   (273853) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [WARNING]  (273853) : Exiting Master process...
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [WARNING]  (273853) : Exiting Master process...
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [ALERT]    (273853) : Current worker (273855) exited with code 143 (Terminated)
Nov 29 03:23:41 np0005539551 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[273849]: [WARNING]  (273853) : All workers exited. Exiting... (0)
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.540 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '479f969f-dbf7-4938-8979-b8532eb113f6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:41 np0005539551 systemd[1]: libpod-37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590.scope: Deactivated successfully.
Nov 29 03:23:41 np0005539551 podman[274044]: 2025-11-29 08:23:41.546365698 +0000 UTC m=+0.045046790 container died 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.552 227364 DEBUG nova.virt.libvirt.guest [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.553 227364 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migration operation has completed#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.553 227364 INFO nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] _post_live_migration() is started..#033[00m
Nov 29 03:23:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:41Z|00524|binding|INFO|Releasing lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 from this chassis (sb_readonly=0)
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.559 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.560 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.561 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.564 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '479f969f-dbf7-4938-8979-b8532eb113f6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:41 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:41 np0005539551 systemd[1]: var-lib-containers-storage-overlay-b0e9ad8b4c9030a239071290ef7f456d453e664d2b0e9cf73017cc8a6caea6e5-merged.mount: Deactivated successfully.
Nov 29 03:23:41 np0005539551 podman[274044]: 2025-11-29 08:23:41.59414622 +0000 UTC m=+0.092827302 container cleanup 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:23:41 np0005539551 systemd[1]: libpod-conmon-37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590.scope: Deactivated successfully.
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.631 227364 DEBUG nova.compute.manager [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.632 227364 DEBUG oslo_concurrency.lockutils [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.633 227364 DEBUG oslo_concurrency.lockutils [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.633 227364 DEBUG oslo_concurrency.lockutils [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.634 227364 DEBUG nova.compute.manager [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.634 227364 DEBUG nova.compute.manager [req-e90f84cf-8871-4332-ac10-6067986e7333 req-fcbf84c0-5091-427a-a981-f1ac50df0eef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:41 np0005539551 podman[274083]: 2025-11-29 08:23:41.658619764 +0000 UTC m=+0.041262966 container remove 37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.665 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9859ded1-6d54-4af0-9e88-f8d420520547]: (4, ('Sat Nov 29 08:23:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 (37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590)\n37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590\nSat Nov 29 08:23:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 (37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590)\n37c3cf7510f6d3b21b43f9a4022190c221fe024b4e8b5aa63d8f0f9f13733590\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.667 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc4e3b5-b166-4268-95b8-3b81af014462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.668 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ca7446-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.670 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 kernel: tap36ca7446-a0: left promiscuous mode
Nov 29 03:23:41 np0005539551 nova_compute[227360]: 2025-11-29 08:23:41.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.692 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[71ba7f17-1030-49cc-bc4a-90102588ebb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.712 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc11bcf-531a-4f64-ac68-5d248b5716c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.714 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dacaa5df-df9b-4681-9423-ba9c9ba2cd5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.728 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0ceb3a81-1709-4ce8-86ba-d117d63405bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763474, 'reachable_time': 21135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274102, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.731 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.731 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c28ae254-808a-4430-9535-be0eb89968c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 systemd[1]: run-netns-ovnmeta\x2d36ca7446\x2da7cc\x2d4230\x2da5a5\x2d4c818b881403.mount: Deactivated successfully.
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.732 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 unbound from our chassis#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.733 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ca7446-a7cc-4230-a5a5-4c818b881403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.734 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ec28288a-9fb7-429f-8e9e-a73e1c6ea221]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.735 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 unbound from our chassis#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.736 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ca7446-a7cc-4230-a5a5-4c818b881403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.736 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8b52007d-087b-427b-abca-d579532d273e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:41.982 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.607 227364 DEBUG nova.network.neutron [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updated VIF entry in instance network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.608 227364 DEBUG nova.network.neutron [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.634 227364 DEBUG oslo_concurrency.lockutils [req-ae295325-f632-4c8f-9567-2dde25f377df req-e4415213-0b8d-40b9-8cca-9c498bcee52c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.955 227364 DEBUG nova.compute.manager [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.956 227364 DEBUG oslo_concurrency.lockutils [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.956 227364 DEBUG oslo_concurrency.lockutils [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.956 227364 DEBUG oslo_concurrency.lockutils [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.957 227364 DEBUG nova.compute.manager [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:42 np0005539551 nova_compute[227360]: 2025-11-29 08:23:42.957 227364 DEBUG nova.compute.manager [req-2637521f-1822-4657-b81a-1839721b50f5 req-c5541f4b-07b0-4cf8-a27b-4726b0a02d41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:43.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.401 227364 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Activated binding for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.401 227364 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.402 227364 DEBUG nova.virt.libvirt.vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:32Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.402 227364 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.403 227364 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.403 227364 DEBUG os_vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.405 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.405 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d2b1b3c-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.430 227364 INFO os_vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9')#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.431 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.431 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.431 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.432 227364 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.432 227364 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deleting instance files /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536_del#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.432 227364 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deletion of /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536_del complete#033[00m
Nov 29 03:23:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.721 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.722 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.722 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.723 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.723 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.723 227364 WARNING nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.723 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 WARNING nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.724 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.725 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.725 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.725 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.725 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.726 227364 WARNING nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.726 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.726 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.727 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.727 227364 DEBUG oslo_concurrency.lockutils [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.727 227364 DEBUG nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:43 np0005539551 nova_compute[227360]: 2025-11-29 08:23:43.727 227364 WARNING nova.compute.manager [req-35e52eef-61df-44c7-8553-540e284bb492 req-5386f939-bb5f-49ab-806b-c948210f6082 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 03:23:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.212 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.213 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.235 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.340 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.340 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.347 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.348 227364 INFO nova.compute.claims [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.460 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3808276717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.890 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.897 227364 DEBUG nova.compute.provider_tree [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.914 227364 DEBUG nova.scheduler.client.report [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.936 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:46 np0005539551 nova_compute[227360]: 2025-11-29 08:23:46.937 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.009 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.009 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.075 227364 INFO nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.097 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:23:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:47.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.145 227364 INFO nova.virt.block_device [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Booting with volume d807030f-7b93-4396-9211-17a740c6b338 at /dev/vda#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.307 227364 DEBUG nova.policy [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1552f15deb524705a9456cbe9b54c429', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bace34c102e4d56b089fd695d324f10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.354 227364 DEBUG os_brick.utils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.355 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.366 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.366 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[cf622915-45e4-4ba6-b486-9d31664aeb51]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.367 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.375 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.375 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[67529608-70b7-41cb-a4ef-213a4060a870]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.376 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.384 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.385 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2fe802-e173-410d-88b6-d6ef08282b83]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.386 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2c8505-6a6d-4192-9d76-08a05417cbbf]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.386 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.411 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.413 227364 DEBUG os_brick.initiator.connectors.lightos [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.413 227364 DEBUG os_brick.initiator.connectors.lightos [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.414 227364 DEBUG os_brick.initiator.connectors.lightos [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.414 227364 DEBUG os_brick.utils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.414 227364 DEBUG nova.virt.block_device [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating existing volume attachment record: b1eb7ca5-9c24-4b9c-83f5-1607b410c2aa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.433 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:47 np0005539551 nova_compute[227360]: 2025-11-29 08:23:47.640 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.380 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.381 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.381 227364 INFO nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Creating image(s)#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.382 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.382 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Ensure instance console log exists: /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.382 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.383 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.383 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.427 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.804 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.804 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.805 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.841 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.842 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.842 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.842 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:48 np0005539551 nova_compute[227360]: 2025-11-29 08:23:48.842 227364 DEBUG oslo_concurrency.processutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.201 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Successfully created port: 270b7a06-5cdd-4855-a693-0b30baf78df7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:23:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3472975666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.274 227364 DEBUG oslo_concurrency.processutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.425 227364 WARNING nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.426 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4452MB free_disk=20.851646423339844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.427 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.427 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.477 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Migration for instance 27bb49e9-1b5b-452b-89e4-21008913f536 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.496 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.548 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Migration 1b0fcf33-c641-4e86-ad8d-2103b7f82fe6 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.549 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Instance 2e7e8742-c504-412d-82cf-4087bd745c3e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.549 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.550 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:49 np0005539551 nova_compute[227360]: 2025-11-29 08:23:49.623 227364 DEBUG oslo_concurrency.processutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1463288217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.016 227364 DEBUG oslo_concurrency.processutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.021 227364 DEBUG nova.compute.provider_tree [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.051 227364 DEBUG nova.scheduler.client.report [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.102 227364 DEBUG nova.compute.resource_tracker [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.103 227364 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.109 227364 INFO nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.239 227364 INFO nova.scheduler.client.report [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Deleted allocation for migration 1b0fcf33-c641-4e86-ad8d-2103b7f82fe6
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.240 227364 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.449 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Successfully updated port: 270b7a06-5cdd-4855-a693-0b30baf78df7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.531 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.531 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.531 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.705 227364 DEBUG nova.compute.manager [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.706 227364 DEBUG nova.compute.manager [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing instance network info cache due to event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:23:50 np0005539551 nova_compute[227360]: 2025-11-29 08:23:50.706 227364 DEBUG oslo_concurrency.lockutils [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:51.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:51 np0005539551 nova_compute[227360]: 2025-11-29 08:23:51.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:51 np0005539551 nova_compute[227360]: 2025-11-29 08:23:51.398 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:23:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.305276) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632305417, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1584, "num_deletes": 260, "total_data_size": 3322357, "memory_usage": 3365840, "flush_reason": "Manual Compaction"}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632318339, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 2179333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48933, "largest_seqno": 50512, "table_properties": {"data_size": 2172780, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14730, "raw_average_key_size": 20, "raw_value_size": 2159196, "raw_average_value_size": 2949, "num_data_blocks": 160, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404513, "oldest_key_time": 1764404513, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 13047 microseconds, and 6295 cpu microseconds.
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.318382) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 2179333 bytes OK
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.318403) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.320012) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.320024) EVENT_LOG_v1 {"time_micros": 1764404632320020, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.320040) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3314957, prev total WAL file size 3314957, number of live WAL files 2.
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.320853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373630' seq:0, type:0; will stop at (end)
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(2128KB)], [96(9942KB)]
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632320892, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 12360124, "oldest_snapshot_seqno": -1}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 8219 keys, 12216588 bytes, temperature: kUnknown
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632378062, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 12216588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12162198, "index_size": 32710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 213556, "raw_average_key_size": 25, "raw_value_size": 12016026, "raw_average_value_size": 1461, "num_data_blocks": 1281, "num_entries": 8219, "num_filter_entries": 8219, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.378452) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 12216588 bytes
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.379721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.7 rd, 213.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.7 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 8755, records dropped: 536 output_compression: NoCompression
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.379748) EVENT_LOG_v1 {"time_micros": 1764404632379735, "job": 60, "event": "compaction_finished", "compaction_time_micros": 57310, "compaction_time_cpu_micros": 28118, "output_level": 6, "num_output_files": 1, "total_output_size": 12216588, "num_input_records": 8755, "num_output_records": 8219, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632380750, "job": 60, "event": "table_file_deletion", "file_number": 98}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632383602, "job": 60, "event": "table_file_deletion", "file_number": 96}
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.320749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.383670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.383675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.383677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.383679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:23:52.383681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:53 np0005539551 nova_compute[227360]: 2025-11-29 08:23:53.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:55.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.360 227364 DEBUG nova.network.neutron [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.422 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.423 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance network_info: |[{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.423 227364 DEBUG oslo_concurrency.lockutils [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.423 227364 DEBUG nova.network.neutron [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.428 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start _get_guest_xml network_info=[{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d807030f-7b93-4396-9211-17a740c6b338', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd807030f-7b93-4396-9211-17a740c6b338', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'attached_at': '', 'detached_at': '', 'volume_id': 'd807030f-7b93-4396-9211-17a740c6b338', 'serial': 'd807030f-7b93-4396-9211-17a740c6b338'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'b1eb7ca5-9c24-4b9c-83f5-1607b410c2aa', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.431 227364 WARNING nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.436 227364 DEBUG nova.virt.libvirt.host [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.436 227364 DEBUG nova.virt.libvirt.host [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.440 227364 DEBUG nova.virt.libvirt.host [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.440 227364 DEBUG nova.virt.libvirt.host [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.441 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.442 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.442 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.442 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.442 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.442 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.443 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.443 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.443 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.443 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.443 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.444 227364 DEBUG nova.virt.hardware [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.478 227364 DEBUG nova.storage.rbd_utils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:55 np0005539551 nova_compute[227360]: 2025-11-29 08:23:55.483 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:55.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:56 np0005539551 nova_compute[227360]: 2025-11-29 08:23:56.228 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:56 np0005539551 nova_compute[227360]: 2025-11-29 08:23:56.553 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404621.552147, 27bb49e9-1b5b-452b-89e4-21008913f536 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:56 np0005539551 nova_compute[227360]: 2025-11-29 08:23:56.554 227364 INFO nova.compute.manager [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:57.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1014461165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:57 np0005539551 nova_compute[227360]: 2025-11-29 08:23:57.321 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:57.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.137 227364 DEBUG nova.compute.manager [None req-cf3498c7-bc52-4540-ab2f-aabff989d382 - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.191 227364 DEBUG nova.virt.libvirt.vif [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.192 227364 DEBUG nova.network.os_vif_util [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.192 227364 DEBUG nova.network.os_vif_util [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.194 227364 DEBUG nova.objects.instance [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.268 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <uuid>2e7e8742-c504-412d-82cf-4087bd745c3e</uuid>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <name>instance-00000082</name>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerActionsTestOtherA-server-1414239367</nova:name>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:23:55</nova:creationTime>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <nova:port uuid="270b7a06-5cdd-4855-a693-0b30baf78df7">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="serial">2e7e8742-c504-412d-82cf-4087bd745c3e</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="uuid">2e7e8742-c504-412d-82cf-4087bd745c3e</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-d807030f-7b93-4396-9211-17a740c6b338">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <serial>d807030f-7b93-4396-9211-17a740c6b338</serial>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:74:cb:18"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <target dev="tap270b7a06-5c"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/console.log" append="off"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:23:58 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:23:58 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:23:58 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:23:58 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.269 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Preparing to wait for external event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.270 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.271 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.271 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.273 227364 DEBUG nova.virt.libvirt.vif [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.273 227364 DEBUG nova.network.os_vif_util [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.275 227364 DEBUG nova.network.os_vif_util [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.276 227364 DEBUG os_vif [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.277 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.277 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.278 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.282 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.282 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap270b7a06-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.283 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap270b7a06-5c, col_values=(('external_ids', {'iface-id': '270b7a06-5cdd-4855-a693-0b30baf78df7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:cb:18', 'vm-uuid': '2e7e8742-c504-412d-82cf-4087bd745c3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:58 np0005539551 NetworkManager[48922]: <info>  [1764404638.2868] manager: (tap270b7a06-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.289 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.292 227364 INFO os_vif [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c')#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.621 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.621 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.621 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:74:cb:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.622 227364 INFO nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Using config drive#033[00m
Nov 29 03:23:58 np0005539551 nova_compute[227360]: 2025-11-29 08:23:58.643 227364 DEBUG nova.storage.rbd_utils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:59.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.258 227364 INFO nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Creating config drive at /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.270 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuckmow0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.432 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkuckmow0" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.465 227364 DEBUG nova.storage.rbd_utils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.470 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config 2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.661 227364 DEBUG oslo_concurrency.processutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config 2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.662 227364 INFO nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Deleting local config drive /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.681 227364 DEBUG nova.network.neutron [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updated VIF entry in instance network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.681 227364 DEBUG nova.network.neutron [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.706 227364 DEBUG oslo_concurrency.lockutils [req-66424ea3-4cb8-40f0-a3e5-2661f489e4a0 req-fe5a7a79-5dfa-4f56-8ab3-666e69c0afab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:59 np0005539551 kernel: tap270b7a06-5c: entered promiscuous mode
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.7115] manager: (tap270b7a06-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 03:23:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:59Z|00525|binding|INFO|Claiming lport 270b7a06-5cdd-4855-a693-0b30baf78df7 for this chassis.
Nov 29 03:23:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:59Z|00526|binding|INFO|270b7a06-5cdd-4855-a693-0b30baf78df7: Claiming fa:16:3e:74:cb:18 10.100.0.14
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.713 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.719 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.7260] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.7270] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.729 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cb:18 10.100.0.14'], port_security=['fa:16:3e:74:cb:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=270b7a06-5cdd-4855-a693-0b30baf78df7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.730 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 270b7a06-5cdd-4855-a693-0b30baf78df7 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.731 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.741 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ea764239-e3fb-4698-90c3-16a8b2243396]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.741 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc1dfc3-81 in ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:59 np0005539551 systemd-machined[190756]: New machine qemu-57-instance-00000082.
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.743 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc1dfc3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.744 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa496ae-4766-4561-8789-c4bd23281f09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.744 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee16c2e-0d91-41cb-bc19-c640054a4bb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.757 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[704928c2-6b48-4c64-9b2d-10361ae3970c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 systemd[1]: Started Virtual Machine qemu-57-instance-00000082.
Nov 29 03:23:59 np0005539551 systemd-udevd[274294]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.781 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[876d2f50-758d-4e8c-920a-4314a1c69297]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.7985] device (tap270b7a06-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.7997] device (tap270b7a06-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.819 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2b00d8-eb2d-42d5-b428-67f327404c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.833 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[37675016-512f-4504-a930-ff2ac9b75877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.8347] manager: (tap7fc1dfc3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.861 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0cefee8a-a5d2-4cb5-ac8a-836612c7bb42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.863 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[45fc1a9d-42f1-4291-ad33-684636708adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 NetworkManager[48922]: <info>  [1764404639.8815] device (tap7fc1dfc3-80): carrier: link connected
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.885 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5b1a06-4209-4d1e-b9b3-d2d0ad6c2d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.888 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.901 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[17f48112-aaec-4e39-a142-f7fa6674c400]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768676, 'reachable_time': 42167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274325, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.909 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.912 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dc55975c-1245-4afc-a822-c40436c477fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:273e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768676, 'tstamp': 768676}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274326, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:59Z|00527|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 ovn-installed in OVS
Nov 29 03:23:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:23:59Z|00528|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 up in Southbound
Nov 29 03:23:59 np0005539551 nova_compute[227360]: 2025-11-29 08:23:59.921 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.930 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[937f4e60-8253-4ac1-974a-f250a25202a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768676, 'reachable_time': 42167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274327, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:23:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:59.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:23:59.958 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b07cc98a-22dd-4227-b7a5-0202778d4cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.017 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcfe058-1237-44cb-8731-655e6f27408d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.018 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.018 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.018 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:00 np0005539551 kernel: tap7fc1dfc3-80: entered promiscuous mode
Nov 29 03:24:00 np0005539551 NetworkManager[48922]: <info>  [1764404640.0206] manager: (tap7fc1dfc3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.020 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.023 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.024 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:00Z|00529|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=1)
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.025 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.026 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3354eb5b-85ae-4bd2-9c4a-1966eb40a961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.027 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:00.027 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'env', 'PROCESS_TAG=haproxy-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc1dfc3-8d7f-4854-980d-37a93f366035.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.037 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.252 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404640.2520268, 2e7e8742-c504-412d-82cf-4087bd745c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.253 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.411 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.413 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:00 np0005539551 nova_compute[227360]: 2025-11-29 08:24:00.413 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:00 np0005539551 podman[274401]: 2025-11-29 08:24:00.407705731 +0000 UTC m=+0.031120712 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:00 np0005539551 podman[274401]: 2025-11-29 08:24:00.548277724 +0000 UTC m=+0.171692725 container create 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:24:00 np0005539551 systemd[1]: Started libpod-conmon-81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839.scope.
Nov 29 03:24:00 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:24:00 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f4bca024cc7f2b2247233261b71d9ac1430e4428ab3d60019788b21b26d81/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:00 np0005539551 podman[274401]: 2025-11-29 08:24:00.671431145 +0000 UTC m=+0.294846127 container init 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:24:00 np0005539551 podman[274401]: 2025-11-29 08:24:00.68191886 +0000 UTC m=+0.305333861 container start 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:00 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [NOTICE]   (274435) : New worker (274442) forked
Nov 29 03:24:00 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [NOTICE]   (274435) : Loading success.
Nov 29 03:24:00 np0005539551 podman[274420]: 2025-11-29 08:24:00.749071926 +0000 UTC m=+0.068934266 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:00 np0005539551 podman[274419]: 2025-11-29 08:24:00.758711817 +0000 UTC m=+0.087123018 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:24:00 np0005539551 podman[274465]: 2025-11-29 08:24:00.863340048 +0000 UTC m=+0.100722076 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:24:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:01.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.230 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.667 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.673 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404640.2523813, 2e7e8742-c504-412d-82cf-4087bd745c3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.674 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.696 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.702 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.719 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.720 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.721 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] 2e7e8742-c504-412d-82cf-4087bd745c3e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.721 227364 WARNING nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.721 227364 WARNING nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.721 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Removable base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.722 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.722 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.722 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.722 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.723 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:24:01 np0005539551 nova_compute[227360]: 2025-11-29 08:24:01.729 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 03:24:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:03.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.897 227364 DEBUG nova.compute.manager [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.897 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.897 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG nova.compute.manager [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Processing event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG nova.compute.manager [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.898 227364 DEBUG oslo_concurrency.lockutils [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.899 227364 DEBUG nova.compute.manager [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.899 227364 WARNING nova.compute.manager [req-6d740ae8-9763-459b-bf09-ede520676aac req-6174e9fd-ccc3-42cf-addd-3486208a4a3f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state building and task_state spawning.
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.899 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.903 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404643.9027998, 2e7e8742-c504-412d-82cf-4087bd745c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.903 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Resumed (Lifecycle Event)
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.904 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.912 227364 INFO nova.virt.libvirt.driver [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance spawned successfully.
Nov 29 03:24:03 np0005539551 nova_compute[227360]: 2025-11-29 08:24:03.912 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:24:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.386 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.392 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.393 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.393 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.394 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.395 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.395 227364 DEBUG nova.virt.libvirt.driver [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.401 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.448 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.470 227364 INFO nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Took 17.09 seconds to spawn the instance on the hypervisor.
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.471 227364 DEBUG nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.555 227364 INFO nova.compute.manager [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Took 19.25 seconds to build instance.
Nov 29 03:24:05 np0005539551 nova_compute[227360]: 2025-11-29 08:24:05.578 227364 DEBUG oslo_concurrency.lockutils [None req-59647844-62c4-4e69-afff-7c1e0b7241b6 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 03:24:06 np0005539551 nova_compute[227360]: 2025-11-29 08:24:06.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:07.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:08 np0005539551 nova_compute[227360]: 2025-11-29 08:24:08.290 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 03:24:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:11.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:11 np0005539551 nova_compute[227360]: 2025-11-29 08:24:11.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:11.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:13.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:13 np0005539551 nova_compute[227360]: 2025-11-29 08:24:13.274 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:13 np0005539551 nova_compute[227360]: 2025-11-29 08:24:13.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:15.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:24:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:15.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.163 227364 DEBUG nova.compute.manager [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.164 227364 DEBUG nova.compute.manager [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing instance network info cache due to event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.164 227364 DEBUG oslo_concurrency.lockutils [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.164 227364 DEBUG oslo_concurrency.lockutils [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.164 227364 DEBUG nova.network.neutron [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:24:16 np0005539551 nova_compute[227360]: 2025-11-29 08:24:16.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:17.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:17Z|00530|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:24:17 np0005539551 nova_compute[227360]: 2025-11-29 08:24:17.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 03:24:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:17.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:18 np0005539551 nova_compute[227360]: 2025-11-29 08:24:18.293 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:18Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:cb:18 10.100.0.14
Nov 29 03:24:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:18Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:cb:18 10.100.0.14
Nov 29 03:24:19 np0005539551 nova_compute[227360]: 2025-11-29 08:24:19.153 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:19.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:19 np0005539551 nova_compute[227360]: 2025-11-29 08:24:19.350 227364 DEBUG nova.network.neutron [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updated VIF entry in instance network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:24:19 np0005539551 nova_compute[227360]: 2025-11-29 08:24:19.350 227364 DEBUG nova.network.neutron [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:24:19 np0005539551 nova_compute[227360]: 2025-11-29 08:24:19.394 227364 DEBUG oslo_concurrency.lockutils [req-52cb2dfd-877f-4b6d-a4fd-73700b62394a req-669fd778-3f0d-469c-bba3-21643ad717e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:24:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:19.874 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:19.875 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:19.877 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:19.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:20 np0005539551 nova_compute[227360]: 2025-11-29 08:24:20.680 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:24:20 np0005539551 nova_compute[227360]: 2025-11-29 08:24:20.681 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:24:20 np0005539551 nova_compute[227360]: 2025-11-29 08:24:20.681 227364 DEBUG nova.network.neutron [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:24:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:21.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:21 np0005539551 nova_compute[227360]: 2025-11-29 08:24:21.242 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:21.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.803 227364 DEBUG nova.network.neutron [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.827 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.930 227364 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.931 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Creating file /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/678f0cffde40439ea3e6dfd054196b4e.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 03:24:23 np0005539551 nova_compute[227360]: 2025-11-29 08:24:23.932 227364 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/678f0cffde40439ea3e6dfd054196b4e.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:23.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.357 227364 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/678f0cffde40439ea3e6dfd054196b4e.tmp" returned: 1 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.359 227364 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/678f0cffde40439ea3e6dfd054196b4e.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.360 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Creating directory /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.361 227364 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.595 227364 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:24 np0005539551 nova_compute[227360]: 2025-11-29 08:24:24.599 227364 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:24:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:25.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:26 np0005539551 nova_compute[227360]: 2025-11-29 08:24:26.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:28 np0005539551 nova_compute[227360]: 2025-11-29 08:24:28.300 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:28 np0005539551 nova_compute[227360]: 2025-11-29 08:24:28.898 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:29.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:30.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:31 np0005539551 nova_compute[227360]: 2025-11-29 08:24:31.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:31 np0005539551 nova_compute[227360]: 2025-11-29 08:24:31.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:31 np0005539551 nova_compute[227360]: 2025-11-29 08:24:31.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:32 np0005539551 nova_compute[227360]: 2025-11-29 08:24:32.424 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:24:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.435 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.435 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:24:33 np0005539551 podman[274679]: 2025-11-29 08:24:33.466459559 +0000 UTC m=+1.889090097 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.872272491s
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.872273445s
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.872483253s, txc = 0x5616f4082300
Nov 29 03:24:33 np0005539551 podman[274678]: 2025-11-29 08:24:33.503815849 +0000 UTC m=+1.935838351 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:24:33 np0005539551 podman[274677]: 2025-11-29 08:24:33.524489469 +0000 UTC m=+1.955886695 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:24:33 np0005539551 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-1[81668]: 2025-11-29T08:24:33.528+0000 7fe9969b2640 -1 mon.compute-1@2(peon).paxos(paxos updating c 4017..4740) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.624767542s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 03:24:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).paxos(paxos updating c 4017..4740) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.624767542s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.541799545s, txc = 0x5616f4ca2600
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.541431427s, txc = 0x5616f3ab4f00
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.137972832s, txc = 0x5616f33f8000
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.045638561s, txc = 0x5616f3250300
Nov 29 03:24:33 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.879553318s, txc = 0x5616f4150600
Nov 29 03:24:33 np0005539551 nova_compute[227360]: 2025-11-29 08:24:33.976 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:24:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:34.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:24:34 np0005539551 nova_compute[227360]: 2025-11-29 08:24:34.648 227364 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:24:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:35 np0005539551 nova_compute[227360]: 2025-11-29 08:24:35.593 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:24:35 np0005539551 nova_compute[227360]: 2025-11-29 08:24:35.617 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:24:35 np0005539551 nova_compute[227360]: 2025-11-29 08:24:35.617 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:24:35 np0005539551 nova_compute[227360]: 2025-11-29 08:24:35.618 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:36 np0005539551 kernel: tap270b7a06-5c (unregistering): left promiscuous mode
Nov 29 03:24:36 np0005539551 NetworkManager[48922]: <info>  [1764404676.2366] device (tap270b7a06-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:36Z|00531|binding|INFO|Releasing lport 270b7a06-5cdd-4855-a693-0b30baf78df7 from this chassis (sb_readonly=0)
Nov 29 03:24:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:36Z|00532|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 down in Southbound
Nov 29 03:24:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:36Z|00533|binding|INFO|Removing iface tap270b7a06-5c ovn-installed in OVS
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.258 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cb:18 10.100.0.14'], port_security=['fa:16:3e:74:cb:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=270b7a06-5cdd-4855-a693-0b30baf78df7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.259 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 270b7a06-5cdd-4855-a693-0b30baf78df7 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.260 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.261 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a452d2fa-c010-49ba-814b-c5ab0303e4d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.262 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace which is not needed anymore
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:36 np0005539551 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 29 03:24:36 np0005539551 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000082.scope: Consumed 14.725s CPU time.
Nov 29 03:24:36 np0005539551 systemd-machined[190756]: Machine qemu-57-instance-00000082 terminated.
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:36 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [NOTICE]   (274435) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:36 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [NOTICE]   (274435) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:36 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [WARNING]  (274435) : Exiting Master process...
Nov 29 03:24:36 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [ALERT]    (274435) : Current worker (274442) exited with code 143 (Terminated)
Nov 29 03:24:36 np0005539551 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[274416]: [WARNING]  (274435) : All workers exited. Exiting... (0)
Nov 29 03:24:36 np0005539551 systemd[1]: libpod-81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839.scope: Deactivated successfully.
Nov 29 03:24:36 np0005539551 podman[274766]: 2025-11-29 08:24:36.439278399 +0000 UTC m=+0.055257766 container died 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:24:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:36 np0005539551 systemd[1]: var-lib-containers-storage-overlay-184f4bca024cc7f2b2247233261b71d9ac1430e4428ab3d60019788b21b26d81-merged.mount: Deactivated successfully.
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.657 227364 INFO nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance shutdown successfully after 12 seconds.
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.662 227364 INFO nova.virt.libvirt.driver [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance destroyed successfully.
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.663 227364 DEBUG nova.virt.libvirt.vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.663 227364 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.664 227364 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.665 227364 DEBUG os_vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.666 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.667 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap270b7a06-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.702 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.708 227364 INFO os_vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c')#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.715 227364 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.715 227364 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:36 np0005539551 podman[274766]: 2025-11-29 08:24:36.729602893 +0000 UTC m=+0.345582220 container cleanup 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:24:36 np0005539551 systemd[1]: libpod-conmon-81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839.scope: Deactivated successfully.
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.760 227364 DEBUG nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.760 227364 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.760 227364 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.760 227364 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.760 227364 DEBUG nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.761 227364 WARNING nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:24:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.953 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:36 np0005539551 nova_compute[227360]: 2025-11-29 08:24:36.954 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:36 np0005539551 podman[274808]: 2025-11-29 08:24:36.99344288 +0000 UTC m=+0.240639071 container remove 81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:36.999 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[61525bc7-bedb-4fa8-9307-a8ec2b7004c2]: (4, ('Sat Nov 29 08:24:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839)\n81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839\nSat Nov 29 08:24:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839)\n81b5e33138c04f0e2c7f9817a0331a3e2104a72d24446c5adaba36ea28cf3839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.001 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[69a1e4f8-ff19-4aad-86b2-7f3b8829aaaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.002 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.004 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539551 kernel: tap7fc1dfc3-80: left promiscuous mode
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.019 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.021 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5ddad2-aad0-4d3c-b130-17f775d57bd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.041 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f2faccd5-eff2-438d-97c7-6170ed5bd741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.043 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b8dda-ac4b-4e8d-9c78-1a8cfafac215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.059 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bfaf6c91-4f4a-44c3-897b-23c0415d2693]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768669, 'reachable_time': 34564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274823, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.062 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.062 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3642dd-5278-43a6-8cd6-6c8aef9252f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7fc1dfc3\x2d8d7f\x2d4854\x2d980d\x2d37a93f366035.mount: Deactivated successfully.
Nov 29 03:24:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:37.063 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:24:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:37.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.435 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:24:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.9 total, 600.0 interval#012Cumulative writes: 40K writes, 159K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 40K writes, 14K syncs, 2.78 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9559 writes, 39K keys, 9559 commit groups, 1.0 writes per commit group, ingest: 36.74 MB, 0.06 MB/s#012Interval WAL: 9559 writes, 3835 syncs, 2.49 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.644 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.645 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.670 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.824 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.825 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.830 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.831 227364 INFO nova.compute.claims [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:24:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/702346284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.857 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.872 227364 DEBUG neutronclient.v2_0.client [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 270b7a06-5cdd-4855-a693-0b30baf78df7 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.994 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:37 np0005539551 nova_compute[227360]: 2025-11-29 08:24:37.994 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.085 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.085 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.086 227364 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.145 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.146 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4446MB free_disk=20.84020233154297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.146 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.187 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1591747134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.642 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.647 227364 DEBUG nova.compute.provider_tree [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.680 227364 DEBUG nova.scheduler.client.report [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.717 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.718 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.722 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.794 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration for instance 2e7e8742-c504-412d-82cf-4087bd745c3e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.802 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.802 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.825 227364 INFO nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating resource usage from migration c6911945-fb45-4d46-8522-b58cdea15a0b#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.826 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting to track outgoing migration c6911945-fb45-4d46-8522-b58cdea15a0b with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.836 227364 INFO nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.853 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Migration c6911945-fb45-4d46-8522-b58cdea15a0b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.854 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance bb7619f3-13e5-4ca1-913b-b1dcdda532ea actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.854 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.854 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.858 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.912 227364 DEBUG nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.912 227364 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.913 227364 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.913 227364 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.913 227364 DEBUG nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.913 227364 WARNING nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.949 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.978 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.980 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:24:38 np0005539551 nova_compute[227360]: 2025-11-29 08:24:38.980 227364 INFO nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Creating image(s)#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.107 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.132 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.158 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.161 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:39.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.227 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.228 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.229 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.229 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.258 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.263 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.293 227364 DEBUG nova.policy [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:24:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1734173830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.421 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.428 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.460 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.495 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.496 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:39 np0005539551 nova_compute[227360]: 2025-11-29 08:24:39.933 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.038 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:24:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:40.065 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.138 227364 DEBUG nova.objects.instance [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid bb7619f3-13e5-4ca1-913b-b1dcdda532ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.159 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.159 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Ensure instance console log exists: /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.159 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.160 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.160 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:40 np0005539551 nova_compute[227360]: 2025-11-29 08:24:40.724 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Successfully created port: 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.104 227364 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.105 227364 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing instance network info cache due to event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.106 227364 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.106 227364 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.107 227364 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.496 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:41 np0005539551 nova_compute[227360]: 2025-11-29 08:24:41.703 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:42 np0005539551 nova_compute[227360]: 2025-11-29 08:24:42.280 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Successfully updated port: 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:24:42 np0005539551 nova_compute[227360]: 2025-11-29 08:24:42.304 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:42 np0005539551 nova_compute[227360]: 2025-11-29 08:24:42.304 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:42 np0005539551 nova_compute[227360]: 2025-11-29 08:24:42.304 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:42 np0005539551 nova_compute[227360]: 2025-11-29 08:24:42.593 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.045 227364 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updated VIF entry in instance network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.045 227364 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.070 227364 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:43.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.269 227364 DEBUG nova.compute.manager [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.269 227364 DEBUG nova.compute.manager [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing instance network info cache due to event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:43 np0005539551 nova_compute[227360]: 2025-11-29 08:24:43.270 227364 DEBUG oslo_concurrency.lockutils [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:44.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:45.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.224 227364 DEBUG nova.network.neutron [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updating instance_info_cache with network_info: [{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.252 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.252 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Instance network_info: |[{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.253 227364 DEBUG oslo_concurrency.lockutils [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.253 227364 DEBUG nova.network.neutron [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.255 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Start _get_guest_xml network_info=[{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.259 227364 WARNING nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.263 227364 DEBUG nova.virt.libvirt.host [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.264 227364 DEBUG nova.virt.libvirt.host [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.267 227364 DEBUG nova.virt.libvirt.host [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.268 227364 DEBUG nova.virt.libvirt.host [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.268 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.269 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.269 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.269 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.270 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.270 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.270 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.270 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.270 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.271 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.271 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.271 227364 DEBUG nova.virt.hardware [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.273 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3981895096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.694 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.724 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.728 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.761 227364 DEBUG nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.761 227364 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.762 227364 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.762 227364 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.762 227364 DEBUG nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:45 np0005539551 nova_compute[227360]: 2025-11-29 08:24:45.762 227364 WARNING nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:24:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:46.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2104640469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.152 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.155 227364 DEBUG nova.virt.libvirt.vif [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2089884930',display_name='tempest-TestNetworkAdvancedServerOps-server-2089884930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2089884930',id=132,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJO+77UhXU0ti/RlbYhC3aMWwk4oypPMVaoq554oYbmp2Ojft2DnjMDDkauLK0FUGEgT/2zL91cKAxUMco3aYOL98IWtJCrrqPh6dNoPHeMzS5lxlqxxmSeOk34tlcuyg==',key_name='tempest-TestNetworkAdvancedServerOps-2041322078',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-98c60tp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:38Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=bb7619f3-13e5-4ca1-913b-b1dcdda532ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.156 227364 DEBUG nova.network.os_vif_util [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.158 227364 DEBUG nova.network.os_vif_util [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.160 227364 DEBUG nova.objects.instance [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb7619f3-13e5-4ca1-913b-b1dcdda532ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.223 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <uuid>bb7619f3-13e5-4ca1-913b-b1dcdda532ea</uuid>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <name>instance-00000084</name>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2089884930</nova:name>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:24:45</nova:creationTime>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <nova:port uuid="4bfbbfff-ef3d-48a6-916b-d6356dfaee8f">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="serial">bb7619f3-13e5-4ca1-913b-b1dcdda532ea</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="uuid">bb7619f3-13e5-4ca1-913b-b1dcdda532ea</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:e7:c6:ba"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <target dev="tap4bfbbfff-ef"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/console.log" append="off"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:24:46 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:24:46 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:24:46 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:24:46 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.224 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Preparing to wait for external event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.224 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.225 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.225 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.226 227364 DEBUG nova.virt.libvirt.vif [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2089884930',display_name='tempest-TestNetworkAdvancedServerOps-server-2089884930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2089884930',id=132,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJO+77UhXU0ti/RlbYhC3aMWwk4oypPMVaoq554oYbmp2Ojft2DnjMDDkauLK0FUGEgT/2zL91cKAxUMco3aYOL98IWtJCrrqPh6dNoPHeMzS5lxlqxxmSeOk34tlcuyg==',key_name='tempest-TestNetworkAdvancedServerOps-2041322078',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-98c60tp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:38Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=bb7619f3-13e5-4ca1-913b-b1dcdda532ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.226 227364 DEBUG nova.network.os_vif_util [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.226 227364 DEBUG nova.network.os_vif_util [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.227 227364 DEBUG os_vif [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.228 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.228 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.230 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.230 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bfbbfff-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.231 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bfbbfff-ef, col_values=(('external_ids', {'iface-id': '4bfbbfff-ef3d-48a6-916b-d6356dfaee8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:c6:ba', 'vm-uuid': 'bb7619f3-13e5-4ca1-913b-b1dcdda532ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.268 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:46 np0005539551 NetworkManager[48922]: <info>  [1764404686.2691] manager: (tap4bfbbfff-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.275 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.276 227364 INFO os_vif [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef')#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.328 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.329 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.329 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:e7:c6:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.329 227364 INFO nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Using config drive#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.356 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.906 227364 INFO nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Creating config drive at /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.915 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4in9uesw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.943 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.944 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:46 np0005539551 nova_compute[227360]: 2025-11-29 08:24:46.945 227364 DEBUG nova.compute.manager [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.036 227364 DEBUG nova.network.neutron [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updated VIF entry in instance network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.036 227364 DEBUG nova.network.neutron [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updating instance_info_cache with network_info: [{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.048 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4in9uesw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.075 227364 DEBUG nova.storage.rbd_utils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.080 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.108 227364 DEBUG oslo_concurrency.lockutils [req-483403da-1e67-4107-bb51-c034c0985316 req-93cdce38-0c26-4b7f-9d10-e1ce7579e1c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:47.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.242 227364 DEBUG oslo_concurrency.processutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config bb7619f3-13e5-4ca1-913b-b1dcdda532ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.242 227364 INFO nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Deleting local config drive /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea/disk.config because it was imported into RBD.#033[00m
Nov 29 03:24:47 np0005539551 kernel: tap4bfbbfff-ef: entered promiscuous mode
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.2801] manager: (tap4bfbbfff-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 03:24:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:47Z|00534|binding|INFO|Claiming lport 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f for this chassis.
Nov 29 03:24:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:47Z|00535|binding|INFO|4bfbbfff-ef3d-48a6-916b-d6356dfaee8f: Claiming fa:16:3e:e7:c6:ba 10.100.0.11
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.279 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.287 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c6:ba 10.100.0.11'], port_security=['fa:16:3e:e7:c6:ba 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bb7619f3-13e5-4ca1-913b-b1dcdda532ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a0cf290-669e-4d5c-bdaf-045599b84d12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=faa662df-e006-4150-925f-644315c0c578, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.288 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f in datapath 334681ad-19d3-4d55-9af7-56ae7d6c621f bound to our chassis#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.290 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 334681ad-19d3-4d55-9af7-56ae7d6c621f#033[00m
Nov 29 03:24:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:47Z|00536|binding|INFO|Setting lport 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f ovn-installed in OVS
Nov 29 03:24:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:47Z|00537|binding|INFO|Setting lport 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f up in Southbound
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.297 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.299 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.302 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e5d0a8-92c2-4e98-ae02-b39f69ce8f7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.303 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap334681ad-11 in ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.305 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap334681ad-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.305 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[198af221-24f2-4e0d-af54-8ccae56b9ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.306 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a71cea27-b48a-4085-b830-9fd8088faa04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 systemd-udevd[275192]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:47 np0005539551 systemd-machined[190756]: New machine qemu-58-instance-00000084.
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.316 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ae5df3-3351-4ca6-8971-46cc2f4dd52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.3191] device (tap4bfbbfff-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.3200] device (tap4bfbbfff-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:47 np0005539551 systemd[1]: Started Virtual Machine qemu-58-instance-00000084.
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.328 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1943c7-035f-4574-9dda-a346915b8151]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.353 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7207eee0-f46d-4a4d-b2b4-5902d248298e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.3592] manager: (tap334681ad-10): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.358 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfbeec3-1df6-40c9-8555-ff391e008f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.389 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ca148358-df3a-4540-9588-f0e11c4536d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.391 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[772d8d1f-2a27-494a-b40e-7a3d3952f751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.4113] device (tap334681ad-10): carrier: link connected
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.416 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[92892325-44f7-45bf-ab7e-e2c12d1ec8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.430 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a5852d07-8d60-4605-8b25-be1d9c3a2198]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap334681ad-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:16:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773429, 'reachable_time': 38298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275225, 'error': None, 'target': 'ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.442 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4f8605-c59f-4061-93e0-d5c91d9d95f9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:16be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773429, 'tstamp': 773429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275226, 'error': None, 'target': 'ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.454 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a566d5c-7e56-40e6-a8d8-5389b7fa3895]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap334681ad-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:16:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773429, 'reachable_time': 38298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275227, 'error': None, 'target': 'ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.480 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[58209903-db27-4304-84e0-a0616c5cc66b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.531 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9e9fbb-ad5f-46c4-ba6d-b6b9e488c086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.532 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap334681ad-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.532 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.533 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap334681ad-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:47 np0005539551 NetworkManager[48922]: <info>  [1764404687.5353] manager: (tap334681ad-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 03:24:47 np0005539551 kernel: tap334681ad-10: entered promiscuous mode
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.534 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.537 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap334681ad-10, col_values=(('external_ids', {'iface-id': '412b2220-4617-4378-8076-d6fa7a5fc810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.538 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:47Z|00538|binding|INFO|Releasing lport 412b2220-4617-4378-8076-d6fa7a5fc810 from this chassis (sb_readonly=0)
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.553 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.554 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/334681ad-19d3-4d55-9af7-56ae7d6c621f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/334681ad-19d3-4d55-9af7-56ae7d6c621f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.556 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bd88a23e-ed45-4d78-8546-a7c596d83a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.556 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-334681ad-19d3-4d55-9af7-56ae7d6c621f
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/334681ad-19d3-4d55-9af7-56ae7d6c621f.pid.haproxy
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 334681ad-19d3-4d55-9af7-56ae7d6c621f
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:24:47.557 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'env', 'PROCESS_TAG=haproxy-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/334681ad-19d3-4d55-9af7-56ae7d6c621f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.605 227364 DEBUG nova.compute.manager [req-64d44fc9-0056-40db-9b86-e765f0a5a548 req-3ab98657-25fe-4cd6-ab0e-85c196aef7f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.606 227364 DEBUG oslo_concurrency.lockutils [req-64d44fc9-0056-40db-9b86-e765f0a5a548 req-3ab98657-25fe-4cd6-ab0e-85c196aef7f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.606 227364 DEBUG oslo_concurrency.lockutils [req-64d44fc9-0056-40db-9b86-e765f0a5a548 req-3ab98657-25fe-4cd6-ab0e-85c196aef7f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.606 227364 DEBUG oslo_concurrency.lockutils [req-64d44fc9-0056-40db-9b86-e765f0a5a548 req-3ab98657-25fe-4cd6-ab0e-85c196aef7f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.606 227364 DEBUG nova.compute.manager [req-64d44fc9-0056-40db-9b86-e765f0a5a548 req-3ab98657-25fe-4cd6-ab0e-85c196aef7f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Processing event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.762 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.763 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404687.7623446, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.764 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.779 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.782 227364 INFO nova.virt.libvirt.driver [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Instance spawned successfully.#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.783 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.789 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.792 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.806 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.806 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.807 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.807 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.807 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.808 227364 DEBUG nova.virt.libvirt.driver [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.812 227364 DEBUG nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.813 227364 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.813 227364 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.813 227364 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.813 227364 DEBUG nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.814 227364 WARNING nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.814 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.814 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404687.7633462, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.814 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.842 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.845 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404687.765755, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.845 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.872 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.875 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.888 227364 INFO nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Took 8.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.888 227364 DEBUG nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.898 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:47 np0005539551 podman[275301]: 2025-11-29 08:24:47.911195181 +0000 UTC m=+0.057025874 container create 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.943 227364 INFO nova.compute.manager [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Took 10.15 seconds to build instance.#033[00m
Nov 29 03:24:47 np0005539551 systemd[1]: Started libpod-conmon-3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c.scope.
Nov 29 03:24:47 np0005539551 nova_compute[227360]: 2025-11-29 08:24:47.963 227364 DEBUG oslo_concurrency.lockutils [None req-1427d3ca-b2db-4f39-a45d-993c38c6a376 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539551 podman[275301]: 2025-11-29 08:24:47.878142106 +0000 UTC m=+0.023972809 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:47 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:24:47 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adcba297c5168a62686d62d3ca1aec844ec58a28b13ce297eb69f2d7fa2cc111/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:47 np0005539551 podman[275301]: 2025-11-29 08:24:47.990544638 +0000 UTC m=+0.136375351 container init 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:24:47 np0005539551 podman[275301]: 2025-11-29 08:24:47.995924512 +0000 UTC m=+0.141755205 container start 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:48 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [NOTICE]   (275318) : New worker (275320) forked
Nov 29 03:24:48 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [NOTICE]   (275318) : Loading success.
Nov 29 03:24:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:48.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:48 np0005539551 nova_compute[227360]: 2025-11-29 08:24:48.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:49.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.421 227364 DEBUG neutronclient.v2_0.client [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 270b7a06-5cdd-4855-a693-0b30baf78df7 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.422 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.422 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.423 227364 DEBUG nova.network.neutron [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.423 227364 DEBUG nova.objects.instance [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'info_cache' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.948 227364 DEBUG nova.compute.manager [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.949 227364 DEBUG oslo_concurrency.lockutils [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.949 227364 DEBUG oslo_concurrency.lockutils [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.949 227364 DEBUG oslo_concurrency.lockutils [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.949 227364 DEBUG nova.compute.manager [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] No waiting events found dispatching network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:49 np0005539551 nova_compute[227360]: 2025-11-29 08:24:49.950 227364 WARNING nova.compute.manager [req-102c4c34-d7dc-4ce3-a1fb-3c997724e84d req-dac6a086-2d61-4167-86ff-d8d419ca245b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received unexpected event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:24:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:50.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:50 np0005539551 nova_compute[227360]: 2025-11-29 08:24:50.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:50 np0005539551 nova_compute[227360]: 2025-11-29 08:24:50.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:24:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.270 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.480 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404676.4789732, 2e7e8742-c504-412d-82cf-4087bd745c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.480 227364 INFO nova.compute.manager [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.512 227364 DEBUG nova.compute.manager [None req-86e475e0-0cf0-49b2-a06e-bdcef7236e31 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.515 227364 DEBUG nova.compute.manager [None req-86e475e0-0cf0-49b2-a06e-bdcef7236e31 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:51 np0005539551 nova_compute[227360]: 2025-11-29 08:24:51.551 227364 INFO nova.compute.manager [None req-86e475e0-0cf0-49b2-a06e-bdcef7236e31 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:24:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:52.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.457 227364 DEBUG nova.network.neutron [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.494 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.494 227364 DEBUG nova.objects.instance [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.532 227364 DEBUG nova.storage.rbd_utils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 2e7e8742-c504-412d-82cf-4087bd745c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.541 227364 DEBUG nova.virt.libvirt.vif [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.541 227364 DEBUG nova.network.os_vif_util [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.543 227364 DEBUG nova.network.os_vif_util [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.543 227364 DEBUG os_vif [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.545 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.545 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap270b7a06-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.546 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.548 227364 INFO os_vif [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c')#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.548 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.548 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:52 np0005539551 nova_compute[227360]: 2025-11-29 08:24:52.659 227364 DEBUG oslo_concurrency.processutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2931865071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.082 227364 DEBUG oslo_concurrency.processutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.089 227364 DEBUG nova.compute.provider_tree [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.106 227364 DEBUG nova.scheduler.client.report [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.150 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.264 227364 INFO nova.scheduler.client.report [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocation for migration c6911945-fb45-4d46-8522-b58cdea15a0b#033[00m
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.325 227364 DEBUG oslo_concurrency.lockutils [None req-4209d7ae-ff88-4bd7-af9e-68508a378631 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:53Z|00539|binding|INFO|Releasing lport 412b2220-4617-4378-8076-d6fa7a5fc810 from this chassis (sb_readonly=0)
Nov 29 03:24:53 np0005539551 nova_compute[227360]: 2025-11-29 08:24:53.989 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:54.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:54Z|00540|binding|INFO|Releasing lport 412b2220-4617-4378-8076-d6fa7a5fc810 from this chassis (sb_readonly=0)
Nov 29 03:24:54 np0005539551 nova_compute[227360]: 2025-11-29 08:24:54.220 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.367482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694367558, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 869, "num_deletes": 252, "total_data_size": 1605481, "memory_usage": 1623256, "flush_reason": "Manual Compaction"}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694388147, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1058759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50517, "largest_seqno": 51381, "table_properties": {"data_size": 1054685, "index_size": 1790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9687, "raw_average_key_size": 20, "raw_value_size": 1046270, "raw_average_value_size": 2170, "num_data_blocks": 78, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404633, "oldest_key_time": 1764404633, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 20971 microseconds, and 4192 cpu microseconds.
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.388460) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1058759 bytes OK
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.388536) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.395484) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.395513) EVENT_LOG_v1 {"time_micros": 1764404694395505, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.395539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1600970, prev total WAL file size 1600970, number of live WAL files 2.
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.396320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1033KB)], [99(11MB)]
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694396373, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13275347, "oldest_snapshot_seqno": -1}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8181 keys, 11322826 bytes, temperature: kUnknown
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694470367, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11322826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11269436, "index_size": 31838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 213603, "raw_average_key_size": 26, "raw_value_size": 11124576, "raw_average_value_size": 1359, "num_data_blocks": 1239, "num_entries": 8181, "num_filter_entries": 8181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.470586) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11322826 bytes
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.473596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.3 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 8701, records dropped: 520 output_compression: NoCompression
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.473614) EVENT_LOG_v1 {"time_micros": 1764404694473605, "job": 62, "event": "compaction_finished", "compaction_time_micros": 74060, "compaction_time_cpu_micros": 25140, "output_level": 6, "num_output_files": 1, "total_output_size": 11322826, "num_input_records": 8701, "num_output_records": 8181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694473866, "job": 62, "event": "table_file_deletion", "file_number": 101}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694475850, "job": 62, "event": "table_file_deletion", "file_number": 99}
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.396220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.475876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.475880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.475881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.475882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:24:54.475884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:55.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:55 np0005539551 NetworkManager[48922]: <info>  [1764404695.3920] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 03:24:55 np0005539551 NetworkManager[48922]: <info>  [1764404695.3931] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.391 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.425 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.426 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.440 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:24:55Z|00541|binding|INFO|Releasing lport 412b2220-4617-4378-8076-d6fa7a5fc810 from this chassis (sb_readonly=0)
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.475 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.795 227364 DEBUG nova.compute.manager [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.795 227364 DEBUG nova.compute.manager [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing instance network info cache due to event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.795 227364 DEBUG oslo_concurrency.lockutils [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.796 227364 DEBUG oslo_concurrency.lockutils [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:55 np0005539551 nova_compute[227360]: 2025-11-29 08:24:55.796 227364 DEBUG nova.network.neutron [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:56 np0005539551 nova_compute[227360]: 2025-11-29 08:24:56.272 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:57.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:57 np0005539551 nova_compute[227360]: 2025-11-29 08:24:57.786 227364 DEBUG nova.network.neutron [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updated VIF entry in instance network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:57 np0005539551 nova_compute[227360]: 2025-11-29 08:24:57.787 227364 DEBUG nova.network.neutron [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updating instance_info_cache with network_info: [{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:57 np0005539551 nova_compute[227360]: 2025-11-29 08:24:57.844 227364 DEBUG oslo_concurrency.lockutils [req-81315c4c-99e7-4aad-8ab2-1847339ee859 req-c5b433a7-0efd-4b15-85c8-0a923a881243 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:58 np0005539551 nova_compute[227360]: 2025-11-29 08:24:58.419 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:24:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:59.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:00.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 03:25:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:01 np0005539551 nova_compute[227360]: 2025-11-29 08:25:01.274 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:02.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:02Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:c6:ba 10.100.0.11
Nov 29 03:25:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:02Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:c6:ba 10.100.0.11
Nov 29 03:25:02 np0005539551 nova_compute[227360]: 2025-11-29 08:25:02.784 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 03:25:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:03 np0005539551 podman[275372]: 2025-11-29 08:25:03.642563761 +0000 UTC m=+0.081842775 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:25:03 np0005539551 podman[275371]: 2025-11-29 08:25:03.668237566 +0000 UTC m=+0.100964463 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:25:03 np0005539551 podman[275373]: 2025-11-29 08:25:03.706636415 +0000 UTC m=+0.127727277 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:25:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 03:25:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:05.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:05 np0005539551 nova_compute[227360]: 2025-11-29 08:25:05.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:06 np0005539551 nova_compute[227360]: 2025-11-29 08:25:06.277 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.536 227364 INFO nova.compute.manager [None req-82d242c5-d5e8-4938-a416-00bcaf0e88ac fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Get console output#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.540 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.815 227364 INFO nova.compute.manager [None req-7a632e49-c92e-4e91-9918-04d28ef40d30 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Pausing#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.816 227364 DEBUG nova.objects.instance [None req-7a632e49-c92e-4e91-9918-04d28ef40d30 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'flavor' on Instance uuid bb7619f3-13e5-4ca1-913b-b1dcdda532ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.856 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404707.8563483, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.857 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.859 227364 DEBUG nova.compute.manager [None req-7a632e49-c92e-4e91-9918-04d28ef40d30 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.892 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:07 np0005539551 nova_compute[227360]: 2025-11-29 08:25:07.895 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:08.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:09.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:25:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 9701 writes, 51K keys, 9701 commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
Cumulative WAL: 9701 writes, 9701 syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1654 writes, 8243 keys, 1654 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s
Interval WAL: 1654 writes, 1654 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     12.4      5.18              0.24        31    0.167       0      0       0.0       0.0
  L6      1/0   10.80 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9     31.2     26.7     11.77              1.08        30    0.392    198K    16K       0.0       0.0
 Sum      1/0   10.80 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     21.7     22.3     16.95              1.31        61    0.278    198K    16K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    131.2    132.8      0.60              0.24        12    0.050     51K   3143       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     31.2     26.7     11.77              1.08        30    0.392    198K    16K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     12.4      5.18              0.24        30    0.173       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.063, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.36 GB read, 0.09 MB/s read, 16.9 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 37.92 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000256 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2123,36.44 MB,11.9855%) FilterBlock(61,557.55 KB,0.179105%) IndexBlock(61,965.44 KB,0.310135%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:25:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:11.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:11 np0005539551 nova_compute[227360]: 2025-11-29 08:25:11.279 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539551 podman[275608]: 2025-11-29 08:25:11.503147709 +0000 UTC m=+0.075515264 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:11 np0005539551 podman[275608]: 2025-11-29 08:25:11.643820764 +0000 UTC m=+0.216188289 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.004 227364 INFO nova.compute.manager [None req-2e7cbb94-7e3e-46fc-88f8-9573b54507a7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Get console output#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.012 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:25:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:12.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.184 227364 INFO nova.compute.manager [None req-0f421f92-ac23-4110-a88d-260b956df8e4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Unpausing#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.185 227364 DEBUG nova.objects.instance [None req-0f421f92-ac23-4110-a88d-260b956df8e4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'flavor' on Instance uuid bb7619f3-13e5-4ca1-913b-b1dcdda532ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.209 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404712.2092574, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.209 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:12 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.212 227364 DEBUG nova.virt.libvirt.guest [None req-0f421f92-ac23-4110-a88d-260b956df8e4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.213 227364 DEBUG nova.compute.manager [None req-0f421f92-ac23-4110-a88d-260b956df8e4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.228 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.231 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:12 np0005539551 nova_compute[227360]: 2025-11-29 08:25:12.273 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 29 03:25:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:14.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:14 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:14 np0005539551 nova_compute[227360]: 2025-11-29 08:25:14.945 227364 INFO nova.compute.manager [None req-2ce06def-a60e-408e-b143-538aad796210 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Get console output#033[00m
Nov 29 03:25:14 np0005539551 nova_compute[227360]: 2025-11-29 08:25:14.952 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:25:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:15.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.272 227364 DEBUG nova.compute.manager [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.273 227364 DEBUG nova.compute.manager [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing instance network info cache due to event network-changed-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.273 227364 DEBUG oslo_concurrency.lockutils [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.273 227364 DEBUG oslo_concurrency.lockutils [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.273 227364 DEBUG nova.network.neutron [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Refreshing network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.359 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.359 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.360 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.360 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.360 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.361 227364 INFO nova.compute.manager [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Terminating instance#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.362 227364 DEBUG nova.compute.manager [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:25:16 np0005539551 kernel: tap4bfbbfff-ef (unregistering): left promiscuous mode
Nov 29 03:25:16 np0005539551 NetworkManager[48922]: <info>  [1764404716.4123] device (tap4bfbbfff-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:16Z|00542|binding|INFO|Releasing lport 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f from this chassis (sb_readonly=0)
Nov 29 03:25:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:16Z|00543|binding|INFO|Setting lport 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f down in Southbound
Nov 29 03:25:16 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:16Z|00544|binding|INFO|Removing iface tap4bfbbfff-ef ovn-installed in OVS
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.435 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.450 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c6:ba 10.100.0.11'], port_security=['fa:16:3e:e7:c6:ba 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bb7619f3-13e5-4ca1-913b-b1dcdda532ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a0cf290-669e-4d5c-bdaf-045599b84d12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=faa662df-e006-4150-925f-644315c0c578, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.451 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f in datapath 334681ad-19d3-4d55-9af7-56ae7d6c621f unbound from our chassis#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.453 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 334681ad-19d3-4d55-9af7-56ae7d6c621f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.454 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[72442de6-ed30-4fc6-8b2b-0c4b7f0696c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.456 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f namespace which is not needed anymore#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539551 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000084.scope: Consumed 13.777s CPU time.
Nov 29 03:25:16 np0005539551 systemd-machined[190756]: Machine qemu-58-instance-00000084 terminated.
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.605 227364 INFO nova.virt.libvirt.driver [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Instance destroyed successfully.#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.606 227364 DEBUG nova.objects.instance [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid bb7619f3-13e5-4ca1-913b-b1dcdda532ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [NOTICE]   (275318) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [NOTICE]   (275318) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [WARNING]  (275318) : Exiting Master process...
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [WARNING]  (275318) : Exiting Master process...
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [ALERT]    (275318) : Current worker (275320) exited with code 143 (Terminated)
Nov 29 03:25:16 np0005539551 neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f[275314]: [WARNING]  (275318) : All workers exited. Exiting... (0)
Nov 29 03:25:16 np0005539551 systemd[1]: libpod-3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539551 podman[275890]: 2025-11-29 08:25:16.64656199 +0000 UTC m=+0.068071974 container died 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.653 227364 DEBUG nova.virt.libvirt.vif [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2089884930',display_name='tempest-TestNetworkAdvancedServerOps-server-2089884930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2089884930',id=132,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJO+77UhXU0ti/RlbYhC3aMWwk4oypPMVaoq554oYbmp2Ojft2DnjMDDkauLK0FUGEgT/2zL91cKAxUMco3aYOL98IWtJCrrqPh6dNoPHeMzS5lxlqxxmSeOk34tlcuyg==',key_name='tempest-TestNetworkAdvancedServerOps-2041322078',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-98c60tp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:12Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=bb7619f3-13e5-4ca1-913b-b1dcdda532ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.654 227364 DEBUG nova.network.os_vif_util [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.655 227364 DEBUG nova.network.os_vif_util [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.656 227364 DEBUG os_vif [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.657 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.658 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bfbbfff-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.659 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.664 227364 INFO os_vif [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:c6:ba,bridge_name='br-int',has_traffic_filtering=True,id=4bfbbfff-ef3d-48a6-916b-d6356dfaee8f,network=Network(334681ad-19d3-4d55-9af7-56ae7d6c621f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bfbbfff-ef')#033[00m
Nov 29 03:25:16 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539551 systemd[1]: var-lib-containers-storage-overlay-adcba297c5168a62686d62d3ca1aec844ec58a28b13ce297eb69f2d7fa2cc111-merged.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539551 podman[275890]: 2025-11-29 08:25:16.687584709 +0000 UTC m=+0.109094693 container cleanup 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:25:16 np0005539551 systemd[1]: libpod-conmon-3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539551 podman[275942]: 2025-11-29 08:25:16.744987372 +0000 UTC m=+0.038386160 container remove 3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.749 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4793d7-1d41-4d3a-8b25-4835b57f698c]: (4, ('Sat Nov 29 08:25:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f (3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c)\n3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c\nSat Nov 29 08:25:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f (3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c)\n3b2238b6ea1ff55a2dab7af587d2c8248c263cc842e0e4a8cdf9f9806e5aad0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.751 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4b744bd9-bc6e-425c-b4a3-072046f0b65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.752 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap334681ad-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 kernel: tap334681ad-10: left promiscuous mode
Nov 29 03:25:16 np0005539551 nova_compute[227360]: 2025-11-29 08:25:16.766 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.768 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e4937bda-df47-4e4c-80d8-38bcefdfff01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.785 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89b23bd6-88ea-47a3-9986-f59c9b84e6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.786 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce0d6e7-124a-4f00-9e49-9a77b29b003b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.802 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[138e9c15-d0f9-4461-9e9d-c907a6cc6e79]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773422, 'reachable_time': 32842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275960, 'error': None, 'target': 'ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539551 systemd[1]: run-netns-ovnmeta\x2d334681ad\x2d19d3\x2d4d55\x2d9af7\x2d56ae7d6c621f.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.805 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-334681ad-19d3-4d55-9af7-56ae7d6c621f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:16.806 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb1313a-6844-4d6d-9e62-e69632998bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.080 227364 INFO nova.virt.libvirt.driver [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Deleting instance files /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea_del#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.081 227364 INFO nova.virt.libvirt.driver [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Deletion of /var/lib/nova/instances/bb7619f3-13e5-4ca1-913b-b1dcdda532ea_del complete#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.136 227364 INFO nova.compute.manager [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.136 227364 DEBUG oslo.service.loopingcall [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.137 227364 DEBUG nova.compute.manager [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.137 227364 DEBUG nova.network.neutron [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:25:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.902 227364 DEBUG nova.network.neutron [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updated VIF entry in instance network info cache for port 4bfbbfff-ef3d-48a6-916b-d6356dfaee8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.902 227364 DEBUG nova.network.neutron [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updating instance_info_cache with network_info: [{"id": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "address": "fa:16:3e:e7:c6:ba", "network": {"id": "334681ad-19d3-4d55-9af7-56ae7d6c621f", "bridge": "br-int", "label": "tempest-network-smoke--1404794782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bfbbfff-ef", "ovs_interfaceid": "4bfbbfff-ef3d-48a6-916b-d6356dfaee8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.922 227364 DEBUG oslo_concurrency.lockutils [req-c7501510-627a-4ded-b7f3-fc768eb3a58c req-a9463ca7-6dae-40a4-a4f2-aafd88a02714 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bb7619f3-13e5-4ca1-913b-b1dcdda532ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:17 np0005539551 nova_compute[227360]: 2025-11-29 08:25:17.994 227364 DEBUG nova.network.neutron [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.012 227364 INFO nova.compute.manager [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 29 03:25:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.076 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.076 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.138 227364 DEBUG oslo_concurrency.processutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.365 227364 DEBUG nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-vif-unplugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.366 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.366 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.367 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.367 227364 DEBUG nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] No waiting events found dispatching network-vif-unplugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.367 227364 WARNING nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received unexpected event network-vif-unplugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f for instance with vm_state deleted and task_state None.
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.367 227364 DEBUG nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.367 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.368 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.368 227364 DEBUG oslo_concurrency.lockutils [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.368 227364 DEBUG nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] No waiting events found dispatching network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.368 227364 WARNING nova.compute.manager [req-504792ec-d474-4911-873a-7dc643f78e31 req-836193a7-6604-49e4-8755-358ac7193cec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received unexpected event network-vif-plugged-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f for instance with vm_state deleted and task_state None.
Nov 29 03:25:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1771210734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.579 227364 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Received event network-vif-deleted-4bfbbfff-ef3d-48a6-916b-d6356dfaee8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.592 227364 DEBUG oslo_concurrency.processutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.600 227364 DEBUG nova.compute.provider_tree [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.615 227364 DEBUG nova.scheduler.client.report [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.642 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.677 227364 INFO nova.scheduler.client.report [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance bb7619f3-13e5-4ca1-913b-b1dcdda532ea
Nov 29 03:25:18 np0005539551 nova_compute[227360]: 2025-11-29 08:25:18.781 227364 DEBUG oslo_concurrency.lockutils [None req-345133e5-a70e-4799-be61-4283cf6fb501 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "bb7619f3-13e5-4ca1-913b-b1dcdda532ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:19.875 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:19.875 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:19.876 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:20 np0005539551 nova_compute[227360]: 2025-11-29 08:25:20.733 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:21 np0005539551 nova_compute[227360]: 2025-11-29 08:25:21.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:21 np0005539551 nova_compute[227360]: 2025-11-29 08:25:21.659 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:22 np0005539551 nova_compute[227360]: 2025-11-29 08:25:22.052 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:22.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:22 np0005539551 nova_compute[227360]: 2025-11-29 08:25:22.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:23.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:24.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:26.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:26 np0005539551 nova_compute[227360]: 2025-11-29 08:25:26.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:26 np0005539551 nova_compute[227360]: 2025-11-29 08:25:26.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:27.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:28.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.069 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.069 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.101 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.218 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.218 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.230 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.231 227364 INFO nova.compute.claims [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.351 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2408160333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.815 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.824 227364 DEBUG nova.compute.provider_tree [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.870 227364 DEBUG nova.scheduler.client.report [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.893 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.895 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.943 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.944 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.960 227364 INFO nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:25:30 np0005539551 nova_compute[227360]: 2025-11-29 08:25:30.974 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.065 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.067 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.068 227364 INFO nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Creating image(s)
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.112 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.154 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.200 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.206 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.287 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.311 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.312 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.313 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.314 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.352 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.358 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.485 227364 DEBUG nova.policy [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '873186539acb4bf9b90513e0e1beb56f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.602 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404716.600937, bb7619f3-13e5-4ca1-913b-b1dcdda532ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.603 227364 INFO nova.compute.manager [-] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.638 227364 DEBUG nova.compute.manager [None req-f0260b1c-4c3a-4efd-b275-d19af4c3d5e2 - - - - - -] [instance: bb7619f3-13e5-4ca1-913b-b1dcdda532ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.672 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.741 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] resizing rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.859 227364 DEBUG nova.objects.instance [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.874 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.874 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Ensure instance console log exists: /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.875 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.875 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:31 np0005539551 nova_compute[227360]: 2025-11-29 08:25:31.876 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:32.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:32 np0005539551 nova_compute[227360]: 2025-11-29 08:25:32.423 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Successfully created port: 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:32 np0005539551 nova_compute[227360]: 2025-11-29 08:25:32.429 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.909 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Successfully updated port: 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.923 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.923 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.923 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.993 227364 DEBUG nova.compute.manager [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-changed-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.993 227364 DEBUG nova.compute.manager [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Refreshing instance network info cache due to event network-changed-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:33 np0005539551 nova_compute[227360]: 2025-11-29 08:25:33.993 227364 DEBUG oslo_concurrency.lockutils [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:34.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.186 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:25:34 np0005539551 nova_compute[227360]: 2025-11-29 08:25:34.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:25:34 np0005539551 podman[276225]: 2025-11-29 08:25:34.629367007 +0000 UTC m=+0.076525481 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:25:34 np0005539551 podman[276226]: 2025-11-29 08:25:34.629742297 +0000 UTC m=+0.072884813 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:25:34 np0005539551 podman[276224]: 2025-11-29 08:25:34.656163262 +0000 UTC m=+0.101177759 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.244 227364 DEBUG nova.network.neutron [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.265 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.265 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance network_info: |[{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.266 227364 DEBUG oslo_concurrency.lockutils [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.266 227364 DEBUG nova.network.neutron [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Refreshing network info cache for port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.272 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start _get_guest_xml network_info=[{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.278 227364 WARNING nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.286 227364 DEBUG nova.virt.libvirt.host [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.287 227364 DEBUG nova.virt.libvirt.host [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.295 227364 DEBUG nova.virt.libvirt.host [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.296 227364 DEBUG nova.virt.libvirt.host [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.298 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.299 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.300 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.300 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.301 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.301 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.302 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.302 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.303 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.303 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.304 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.304 227364 DEBUG nova.virt.hardware [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.309 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1264552352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.792 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.830 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:35 np0005539551 nova_compute[227360]: 2025-11-29 08:25:35.834 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:36.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.287 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2266804459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.326 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.327 227364 DEBUG nova.virt.libvirt.vif [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1341744770',display_name='tempest-ServerStableDeviceRescueTest-server-1341744770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1341744770',id=135,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-wgna8oed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name=
'tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:31Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=e5dc7e85-787d-4ed8-9752-a604a1815f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.328 227364 DEBUG nova.network.os_vif_util [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.329 227364 DEBUG nova.network.os_vif_util [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.330 227364 DEBUG nova.objects.instance [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.358 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <uuid>e5dc7e85-787d-4ed8-9752-a604a1815f2b</uuid>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <name>instance-00000087</name>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1341744770</nova:name>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:25:35</nova:creationTime>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <nova:port uuid="31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="serial">e5dc7e85-787d-4ed8-9752-a604a1815f2b</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="uuid">e5dc7e85-787d-4ed8-9752-a604a1815f2b</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:df:bc:22"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <target dev="tap31cb9e6c-fe"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/console.log" append="off"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:25:36 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:25:36 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:25:36 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:25:36 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.359 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Preparing to wait for external event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.360 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.360 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.360 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.361 227364 DEBUG nova.virt.libvirt.vif [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1341744770',display_name='tempest-ServerStableDeviceRescueTest-server-1341744770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1341744770',id=135,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-wgna8oed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_
user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:31Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=e5dc7e85-787d-4ed8-9752-a604a1815f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.361 227364 DEBUG nova.network.os_vif_util [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.362 227364 DEBUG nova.network.os_vif_util [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.362 227364 DEBUG os_vif [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.363 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.363 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.364 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.366 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.366 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31cb9e6c-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.367 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31cb9e6c-fe, col_values=(('external_ids', {'iface-id': '31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:bc:22', 'vm-uuid': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.369 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539551 NetworkManager[48922]: <info>  [1764404736.3700] manager: (tap31cb9e6c-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.371 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.375 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.375 227364 INFO os_vif [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe')#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.446 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.447 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.447 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:df:bc:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.448 227364 INFO nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Using config drive#033[00m
Nov 29 03:25:36 np0005539551 nova_compute[227360]: 2025-11-29 08:25:36.479 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.500 227364 INFO nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Creating config drive at /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.508 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lljx9ue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.643 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7lljx9ue" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.670 227364 DEBUG nova.storage.rbd_utils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.674 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.758 227364 DEBUG nova.network.neutron [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updated VIF entry in instance network info cache for port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.759 227364 DEBUG nova.network.neutron [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.777 227364 DEBUG oslo_concurrency.lockutils [req-8b16c9f9-553a-444f-a9a0-1c20fd8ec16e req-0cecb935-fba1-4c2a-bf35-1cbc81847f48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.838 227364 DEBUG oslo_concurrency.processutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.838 227364 INFO nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Deleting local config drive /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:25:37 np0005539551 kernel: tap31cb9e6c-fe: entered promiscuous mode
Nov 29 03:25:37 np0005539551 NetworkManager[48922]: <info>  [1764404737.8904] manager: (tap31cb9e6c-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 03:25:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:37Z|00545|binding|INFO|Claiming lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for this chassis.
Nov 29 03:25:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:37Z|00546|binding|INFO|31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441: Claiming fa:16:3e:df:bc:22 10.100.0.14
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.897 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.902 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.904 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.905 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.917 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da0429a3-1c5c-4c54-9cfb-fb6e5e8fb7fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.918 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.920 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.920 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c03be11e-9705-4758-a5bc-734c507b9171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 systemd-udevd[276429]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.921 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f96399fa-dcae-4614-b7fb-c96f772de891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 systemd-machined[190756]: New machine qemu-59-instance-00000087.
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.932 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[aa24bb78-c48d-4a36-9ac4-30ac586b2bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 systemd[1]: Started Virtual Machine qemu-59-instance-00000087.
Nov 29 03:25:37 np0005539551 NetworkManager[48922]: <info>  [1764404737.9377] device (tap31cb9e6c-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:37 np0005539551 NetworkManager[48922]: <info>  [1764404737.9390] device (tap31cb9e6c-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.957 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3d50b484-4943-41ba-9360-7149d146ee08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.963 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:37Z|00547|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 ovn-installed in OVS
Nov 29 03:25:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:37Z|00548|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 up in Southbound
Nov 29 03:25:37 np0005539551 nova_compute[227360]: 2025-11-29 08:25:37.968 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.981 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4efa0fee-0aff-47af-9bb7-3d0b9c13e42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:37.984 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[80f00754-4cab-4f75-92d5-2cb63f53e547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:37 np0005539551 NetworkManager[48922]: <info>  [1764404737.9856] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.014 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[47612f55-6cff-450c-9140-f83fd82ea95b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.017 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[de0e8ac4-0335-4fb7-81fa-1b3843c635ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 NetworkManager[48922]: <info>  [1764404738.0361] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.040 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bb145f55-7cce-42c9-9f40-044bc98d0e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.056 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e30bdfde-e291-4c10-b72e-b70fecaf49a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778491, 'reachable_time': 18382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276461, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.071 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0b2490-b27b-46ba-ac46-6bf83a17b2fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778491, 'tstamp': 778491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276462, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.089 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f8307a98-64e7-4c27-ae1e-b3fa6c745ae1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778491, 'reachable_time': 18382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276463, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:38.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.115 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a900eb-7a07-4aff-a602-07b69e6cc2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.165 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[283aa256-89f3-4817-b68e-e8be49f5316f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.167 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.167 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.168 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539551 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:25:38 np0005539551 NetworkManager[48922]: <info>  [1764404738.1703] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.171 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.174 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.175 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:38Z|00549|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.191 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.192 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.193 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[16db3d5c-4732-4fd0-aac1-51e425d02c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.193 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:38 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:38.194 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.264 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404738.2639134, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.265 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.280 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.285 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404738.2640824, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.285 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.305 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.307 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.327 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.451 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.451 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.452 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.452 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.452 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:38 np0005539551 podman[276538]: 2025-11-29 08:25:38.581236674 +0000 UTC m=+0.054347072 container create e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:25:38 np0005539551 systemd[1]: Started libpod-conmon-e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8.scope.
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.643 227364 DEBUG nova.compute.manager [req-10f28e26-8702-46f8-803a-54c4349352d2 req-08725f7b-12e5-4200-91c0-e42fba89cc64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.644 227364 DEBUG oslo_concurrency.lockutils [req-10f28e26-8702-46f8-803a-54c4349352d2 req-08725f7b-12e5-4200-91c0-e42fba89cc64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.644 227364 DEBUG oslo_concurrency.lockutils [req-10f28e26-8702-46f8-803a-54c4349352d2 req-08725f7b-12e5-4200-91c0-e42fba89cc64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.645 227364 DEBUG oslo_concurrency.lockutils [req-10f28e26-8702-46f8-803a-54c4349352d2 req-08725f7b-12e5-4200-91c0-e42fba89cc64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.645 227364 DEBUG nova.compute.manager [req-10f28e26-8702-46f8-803a-54c4349352d2 req-08725f7b-12e5-4200-91c0-e42fba89cc64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Processing event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.646 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:25:38 np0005539551 podman[276538]: 2025-11-29 08:25:38.551973542 +0000 UTC m=+0.025083860 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:38 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.651 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404738.651381, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.652 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:38 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f80e94b708bdc1c2230a1060e68c65ff78a2e1b7cdd3c063d3a294359155631/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.662 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.666 227364 INFO nova.virt.libvirt.driver [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance spawned successfully.#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.667 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:25:38 np0005539551 podman[276538]: 2025-11-29 08:25:38.683469739 +0000 UTC m=+0.156580067 container init e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:38 np0005539551 podman[276538]: 2025-11-29 08:25:38.688855415 +0000 UTC m=+0.161965743 container start e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:38 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [NOTICE]   (276576) : New worker (276578) forked
Nov 29 03:25:38 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [NOTICE]   (276576) : Loading success.
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.714 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.716 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.724 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.724 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.725 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.725 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.726 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.726 227364 DEBUG nova.virt.libvirt.driver [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.771 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.814 227364 INFO nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Took 7.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.814 227364 DEBUG nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4047560674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.887 227364 INFO nova.compute.manager [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Took 8.72 seconds to build instance.#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.894 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.904 227364 DEBUG oslo_concurrency.lockutils [None req-8f27157a-b80d-487d-9b96-7b20b08522ad 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.960 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:38 np0005539551 nova_compute[227360]: 2025-11-29 08:25:38.961 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.112 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.113 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4352MB free_disk=20.824642181396484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.113 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.113 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.188 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e5dc7e85-787d-4ed8-9752-a604a1815f2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.189 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.189 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:25:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:39.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.338 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:39.639 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.640 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:39.641 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:25:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1212652604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.787 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.793 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.808 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.836 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:25:39 np0005539551 nova_compute[227360]: 2025-11-29 08:25:39.837 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:40.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.277 227364 DEBUG nova.compute.manager [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.277 227364 DEBUG oslo_concurrency.lockutils [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.278 227364 DEBUG oslo_concurrency.lockutils [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.278 227364 DEBUG oslo_concurrency.lockutils [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.278 227364 DEBUG nova.compute.manager [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.279 227364 WARNING nova.compute.manager [req-7b2f7a29-1a30-4715-b996-21bcfd955fcd req-ed3514b9-61e4-4c7f-98f6-50f8de9e7f2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state None.
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.289 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:41.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.368 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.414 227364 DEBUG nova.compute.manager [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.468 227364 INFO nova.compute.manager [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] instance snapshotting
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.833 227364 INFO nova.virt.libvirt.driver [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Beginning live snapshot process
Nov 29 03:25:41 np0005539551 nova_compute[227360]: 2025-11-29 08:25:41.837 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:42 np0005539551 nova_compute[227360]: 2025-11-29 08:25:42.021 227364 DEBUG nova.virt.libvirt.imagebackend [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:25:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:42.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:42 np0005539551 nova_compute[227360]: 2025-11-29 08:25:42.274 227364 DEBUG nova.storage.rbd_utils [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(3455c1137fe840efb9741fb545c9abf4) on rbd image(e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:25:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 03:25:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:43.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:43 np0005539551 nova_compute[227360]: 2025-11-29 08:25:43.342 227364 DEBUG nova.storage.rbd_utils [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk@3455c1137fe840efb9741fb545c9abf4 to images/c73d83e2-0a3c-4e73-907f-124979a361df clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:25:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:43 np0005539551 nova_compute[227360]: 2025-11-29 08:25:43.451 227364 DEBUG nova.storage.rbd_utils [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] flattening images/c73d83e2-0a3c-4e73-907f-124979a361df flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:25:43 np0005539551 nova_compute[227360]: 2025-11-29 08:25:43.733 227364 DEBUG nova.storage.rbd_utils [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] removing snapshot(3455c1137fe840efb9741fb545c9abf4) on rbd image(e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:25:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:44.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 03:25:44 np0005539551 nova_compute[227360]: 2025-11-29 08:25:44.349 227364 DEBUG nova.storage.rbd_utils [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(snap) on rbd image(c73d83e2-0a3c-4e73-907f-124979a361df) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:25:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:44.643 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:45.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 03:25:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:46.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.292 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.819 227364 INFO nova.virt.libvirt.driver [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Snapshot image upload complete
Nov 29 03:25:46 np0005539551 nova_compute[227360]: 2025-11-29 08:25:46.820 227364 INFO nova.compute.manager [None req-29ce13f9-7b9b-4889-a085-3ce53bc6957e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Took 5.35 seconds to snapshot the instance on the hypervisor.
Nov 29 03:25:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:47.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:47 np0005539551 nova_compute[227360]: 2025-11-29 08:25:47.891 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:47 np0005539551 nova_compute[227360]: 2025-11-29 08:25:47.891 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:47 np0005539551 nova_compute[227360]: 2025-11-29 08:25:47.912 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.005 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.005 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.012 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.012 227364 INFO nova.compute.claims [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:25:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.158 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/48780745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.607 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.615 227364 DEBUG nova.compute.provider_tree [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.644 227364 DEBUG nova.scheduler.client.report [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.682 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.684 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.785 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.785 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.837 227364 INFO nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.860 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.966 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.967 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.968 227364 INFO nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Creating image(s)
Nov 29 03:25:48 np0005539551 nova_compute[227360]: 2025-11-29 08:25:48.997 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.024 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.053 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.057 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.099 227364 DEBUG nova.policy [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8516d6a79ffe412bbe8552fc233adf59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '34cdf0e4d08f40cfb54455e1f681aea2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.156 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.156 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.157 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.157 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.185 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.193 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:49.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.434 227364 INFO nova.compute.manager [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Rescuing#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.435 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.435 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.435 227364 DEBUG nova.network.neutron [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.508 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.575 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] resizing rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.675 227364 DEBUG nova.objects.instance [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.698 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.698 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Ensure instance console log exists: /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.699 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.699 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.699 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:49 np0005539551 nova_compute[227360]: 2025-11-29 08:25:49.826 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Successfully created port: d5958e80-e513-441d-9b72-15ef94535bfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:50.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:50 np0005539551 nova_compute[227360]: 2025-11-29 08:25:50.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:50 np0005539551 nova_compute[227360]: 2025-11-29 08:25:50.968 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Successfully updated port: d5958e80-e513-441d-9b72-15ef94535bfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:50 np0005539551 nova_compute[227360]: 2025-11-29 08:25:50.992 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:50 np0005539551 nova_compute[227360]: 2025-11-29 08:25:50.993 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquired lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:50 np0005539551 nova_compute[227360]: 2025-11-29 08:25:50.993 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.219 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.337 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.372 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.470 227364 DEBUG nova.network.neutron [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.492 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:51 np0005539551 nova_compute[227360]: 2025-11-29 08:25:51.831 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:25:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:52.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.451 227364 DEBUG nova.compute.manager [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-changed-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.452 227364 DEBUG nova.compute.manager [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Refreshing instance network info cache due to event network-changed-d5958e80-e513-441d-9b72-15ef94535bfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.452 227364 DEBUG oslo_concurrency.lockutils [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.580 227364 DEBUG nova.network.neutron [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Updating instance_info_cache with network_info: [{"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.606 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Releasing lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.606 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Instance network_info: |[{"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.607 227364 DEBUG oslo_concurrency.lockutils [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.607 227364 DEBUG nova.network.neutron [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Refreshing network info cache for port d5958e80-e513-441d-9b72-15ef94535bfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.609 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Start _get_guest_xml network_info=[{"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.614 227364 WARNING nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.618 227364 DEBUG nova.virt.libvirt.host [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.619 227364 DEBUG nova.virt.libvirt.host [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.622 227364 DEBUG nova.virt.libvirt.host [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.622 227364 DEBUG nova.virt.libvirt.host [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.623 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.623 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.624 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.624 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.624 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.624 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.625 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.625 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.625 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.625 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.625 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.626 227364 DEBUG nova.virt.hardware [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:52 np0005539551 nova_compute[227360]: 2025-11-29 08:25:52.628 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3599844489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.063 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.095 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.099 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2688955646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.514 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.518 227364 DEBUG nova.virt.libvirt.vif [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1657914540',display_name='tempest-ServerAddressesNegativeTestJSON-server-1657914540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1657914540',id=137,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='34cdf0e4d08f40cfb54455e1f681aea2',ramdisk_id='',reservation_id='r-kaowa8y0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-101422234',owner_user_name='tempest-ServerAddressesNegativeTestJSON-101422234-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:48Z,user_data=None,user_id='8516d6a79ffe412bbe8552fc233adf59',uuid=3bf7e58f-cace-4eee-a7ac-44c1ab096c2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.519 227364 DEBUG nova.network.os_vif_util [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converting VIF {"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.520 227364 DEBUG nova.network.os_vif_util [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.522 227364 DEBUG nova.objects.instance [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.641 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <uuid>3bf7e58f-cace-4eee-a7ac-44c1ab096c2f</uuid>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <name>instance-00000089</name>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1657914540</nova:name>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:25:52</nova:creationTime>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:user uuid="8516d6a79ffe412bbe8552fc233adf59">tempest-ServerAddressesNegativeTestJSON-101422234-project-member</nova:user>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:project uuid="34cdf0e4d08f40cfb54455e1f681aea2">tempest-ServerAddressesNegativeTestJSON-101422234</nova:project>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <nova:port uuid="d5958e80-e513-441d-9b72-15ef94535bfd">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="serial">3bf7e58f-cace-4eee-a7ac-44c1ab096c2f</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="uuid">3bf7e58f-cace-4eee-a7ac-44c1ab096c2f</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:b5:5a:cd"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <target dev="tapd5958e80-e5"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/console.log" append="off"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:25:53 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:25:53 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:25:53 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:25:53 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.642 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Preparing to wait for external event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.643 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.644 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.645 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.647 227364 DEBUG nova.virt.libvirt.vif [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1657914540',display_name='tempest-ServerAddressesNegativeTestJSON-server-1657914540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1657914540',id=137,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='34cdf0e4d08f40cfb54455e1f681aea2',ramdisk_id='',reservation_id='r-kaowa8y0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-101422234',owner_user_name='tempest-ServerAddressesNegativeTestJSON-101422234-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:48Z,user_data=None,user_id='8516d6a79ffe412bbe8552fc233adf59',uuid=3bf7e58f-cace-4eee-a7ac-44c1ab096c2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.648 227364 DEBUG nova.network.os_vif_util [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converting VIF {"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.649 227364 DEBUG nova.network.os_vif_util [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.651 227364 DEBUG os_vif [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.652 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.653 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.655 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.660 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.661 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5958e80-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.661 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5958e80-e5, col_values=(('external_ids', {'iface-id': 'd5958e80-e513-441d-9b72-15ef94535bfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:5a:cd', 'vm-uuid': '3bf7e58f-cace-4eee-a7ac-44c1ab096c2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539551 NetworkManager[48922]: <info>  [1764404753.6870] manager: (tapd5958e80-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.696 227364 INFO os_vif [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5')#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.817 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.817 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.817 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] No VIF found with MAC fa:16:3e:b5:5a:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.818 227364 INFO nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Using config drive#033[00m
Nov 29 03:25:53 np0005539551 nova_compute[227360]: 2025-11-29 08:25:53.841 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:54.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:54 np0005539551 kernel: tap31cb9e6c-fe (unregistering): left promiscuous mode
Nov 29 03:25:54 np0005539551 NetworkManager[48922]: <info>  [1764404754.7592] device (tap31cb9e6c-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:54 np0005539551 nova_compute[227360]: 2025-11-29 08:25:54.760 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:54Z|00550|binding|INFO|Releasing lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 from this chassis (sb_readonly=0)
Nov 29 03:25:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:54Z|00551|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 down in Southbound
Nov 29 03:25:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:54Z|00552|binding|INFO|Removing iface tap31cb9e6c-fe ovn-installed in OVS
Nov 29 03:25:54 np0005539551 nova_compute[227360]: 2025-11-29 08:25:54.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:54.780 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:54.782 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:25:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:54.783 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:54.784 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[869ece92-0619-462e-86d2-c870f915fc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:54.785 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:25:54 np0005539551 nova_compute[227360]: 2025-11-29 08:25:54.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:54 np0005539551 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:25:54 np0005539551 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000087.scope: Consumed 13.430s CPU time.
Nov 29 03:25:54 np0005539551 systemd-machined[190756]: Machine qemu-59-instance-00000087 terminated.
Nov 29 03:25:54 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [NOTICE]   (276576) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:54 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [NOTICE]   (276576) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:54 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [WARNING]  (276576) : Exiting Master process...
Nov 29 03:25:54 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [ALERT]    (276576) : Current worker (276578) exited with code 143 (Terminated)
Nov 29 03:25:54 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[276572]: [WARNING]  (276576) : All workers exited. Exiting... (0)
Nov 29 03:25:54 np0005539551 systemd[1]: libpod-e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8.scope: Deactivated successfully.
Nov 29 03:25:54 np0005539551 podman[277045]: 2025-11-29 08:25:54.940027656 +0000 UTC m=+0.048555884 container died e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:54 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:54 np0005539551 systemd[1]: var-lib-containers-storage-overlay-3f80e94b708bdc1c2230a1060e68c65ff78a2e1b7cdd3c063d3a294359155631-merged.mount: Deactivated successfully.
Nov 29 03:25:54 np0005539551 podman[277045]: 2025-11-29 08:25:54.981314294 +0000 UTC m=+0.089842512 container cleanup e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:25:54 np0005539551 systemd[1]: libpod-conmon-e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8.scope: Deactivated successfully.
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.010 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.015 227364 INFO nova.virt.libvirt.driver [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance destroyed successfully.#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.016 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:55 np0005539551 podman[277078]: 2025-11-29 08:25:55.04881353 +0000 UTC m=+0.044714071 container remove e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d2637d30-2f6c-4b93-89f4-111cd0983587]: (4, ('Sat Nov 29 08:25:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8)\ne2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8\nSat Nov 29 08:25:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (e2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8)\ne2aade3387fe3cb8549a053aaec47d27e99dac913d0c9063fe938bb95f4a08f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.055 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c20296c7-d767-4e24-9180-09311e16c449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.056 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.057 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.079 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca34764-137a-44f8-97a3-aeb1bdaecbee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.092 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c5ef27-b11e-440e-af87-608c0285d823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4495798b-7d25-4f38-a713-0ff78b5ed902]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.109 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8695e558-53c4-4dd5-8e51-814572b46e71]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778485, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277110, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.112 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.112 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5bcc1a-4af2-4c1e-a2fd-2b511878b45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.138 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Attempting a stable device rescue#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.184 227364 INFO nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Creating config drive at /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.192 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9340x8hz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.334 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9340x8hz" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:55.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.366 227364 DEBUG nova.storage.rbd_utils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] rbd image 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.372 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.418 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.420 227364 DEBUG nova.compute.manager [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.420 227364 DEBUG oslo_concurrency.lockutils [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.420 227364 DEBUG oslo_concurrency.lockutils [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.420 227364 DEBUG oslo_concurrency.lockutils [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.420 227364 DEBUG nova.compute.manager [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.421 227364 WARNING nova.compute.manager [req-3e77a1f5-271d-4c76-9263-2cf1cf726991 req-029330bd-bdbc-41e4-ad1d-d07b643a73ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.426 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.426 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Creating image(s)#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.463 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.469 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.512 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.550 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.557 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "cf6e4b0b039e83fc383c917f15db93c582343382" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.559 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "cf6e4b0b039e83fc383c917f15db93c582343382" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.566 227364 DEBUG oslo_concurrency.processutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.567 227364 INFO nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Deleting local config drive /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:25:55 np0005539551 kernel: tapd5958e80-e5: entered promiscuous mode
Nov 29 03:25:55 np0005539551 systemd-udevd[277028]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.6219] manager: (tapd5958e80-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 03:25:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:55Z|00553|binding|INFO|Claiming lport d5958e80-e513-441d-9b72-15ef94535bfd for this chassis.
Nov 29 03:25:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:55Z|00554|binding|INFO|d5958e80-e513-441d-9b72-15ef94535bfd: Claiming fa:16:3e:b5:5a:cd 10.100.0.8
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.625 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.6339] device (tapd5958e80-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.6347] device (tapd5958e80-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.636 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:5a:cd 10.100.0.8'], port_security=['fa:16:3e:b5:5a:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3bf7e58f-cace-4eee-a7ac-44c1ab096c2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22d7cf42-d85f-4608-a248-329d35f5c84f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34cdf0e4d08f40cfb54455e1f681aea2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f764df6-65a5-4758-a40e-a7606418d973', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a7ab28e-e4f3-49d2-9ff0-0393527af031, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d5958e80-e513-441d-9b72-15ef94535bfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.637 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d5958e80-e513-441d-9b72-15ef94535bfd in datapath 22d7cf42-d85f-4608-a248-329d35f5c84f bound to our chassis#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.638 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22d7cf42-d85f-4608-a248-329d35f5c84f#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.649 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[db22bf5f-e44c-4dc2-a49a-2820563a9404]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.649 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22d7cf42-d1 in ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.651 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22d7cf42-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.651 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4f309b-a934-4e53-b74e-d5987db2501f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.652 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fa4c9-9a71-4b4a-9320-c2f6d3a4e435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 systemd-machined[190756]: New machine qemu-60-instance-00000089.
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.662 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3ac51f-bf41-4658-9f18-3ca1088a58f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 systemd[1]: Started Virtual Machine qemu-60-instance-00000089.
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.687 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5cba756b-8d05-4957-87ac-2b47d195bd8f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:55Z|00555|binding|INFO|Setting lport d5958e80-e513-441d-9b72-15ef94535bfd ovn-installed in OVS
Nov 29 03:25:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:55Z|00556|binding|INFO|Setting lport d5958e80-e513-441d-9b72-15ef94535bfd up in Southbound
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.705 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.719 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e23520fa-c9b6-4064-b72b-7ec560178d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.725 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aff3af66-839f-405e-a0bf-1182a0b0d821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.7266] manager: (tap22d7cf42-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.754 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[44a25ba7-2583-4caf-80bf-c5f9fefc5ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.757 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[910876f3-d0c0-4571-83ec-9c040bc87fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.7795] device (tap22d7cf42-d0): carrier: link connected
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.783 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a8be7dc6-fd67-4df6-85af-fd8ee68081bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.798 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4664aef-9c39-4014-9b80-1da3525b7076]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22d7cf42-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:f2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780265, 'reachable_time': 44224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277257, 'error': None, 'target': 'ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.807 227364 DEBUG nova.virt.libvirt.imagebackend [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c73d83e2-0a3c-4e73-907f-124979a361df/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c73d83e2-0a3c-4e73-907f-124979a361df/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.811 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[453838ae-5cbf-45a5-ad3d-689008ffb540]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:f2db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 780265, 'tstamp': 780265}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277258, 'error': None, 'target': 'ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.833 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[39dd4fe6-3927-4d88-8c88-e294db837988]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22d7cf42-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:f2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780265, 'reachable_time': 44224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277263, 'error': None, 'target': 'ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.863 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dd67ad1f-4933-4bf0-82b1-8e8ef2ef465b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.904 227364 DEBUG nova.virt.libvirt.imagebackend [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c73d83e2-0a3c-4e73-907f-124979a361df/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.905 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning images/c73d83e2-0a3c-4e73-907f-124979a361df@snap to None/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.914 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5e69bbe6-ecb3-4a9a-be6d-2799e9c80377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.916 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22d7cf42-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.916 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.916 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22d7cf42-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:55 np0005539551 NetworkManager[48922]: <info>  [1764404755.9209] manager: (tap22d7cf42-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 03:25:55 np0005539551 kernel: tap22d7cf42-d0: entered promiscuous mode
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.927 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22d7cf42-d0, col_values=(('external_ids', {'iface-id': '2c8bc8e2-da43-4ac7-9ae4-5dd83a713bfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:55Z|00557|binding|INFO|Releasing lport 2c8bc8e2-da43-4ac7-9ae4-5dd83a713bfd from this chassis (sb_readonly=0)
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.941 227364 DEBUG nova.network.neutron [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Updated VIF entry in instance network info cache for port d5958e80-e513-441d-9b72-15ef94535bfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.942 227364 DEBUG nova.network.neutron [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Updating instance_info_cache with network_info: [{"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.944 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.948 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22d7cf42-d85f-4608-a248-329d35f5c84f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22d7cf42-d85f-4608-a248-329d35f5c84f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.949 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9f99db61-0eae-424c-bcf9-95910660e0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.949 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-22d7cf42-d85f-4608-a248-329d35f5c84f
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/22d7cf42-d85f-4608-a248-329d35f5c84f.pid.haproxy
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 22d7cf42-d85f-4608-a248-329d35f5c84f
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:55 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:55.950 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f', 'env', 'PROCESS_TAG=haproxy-22d7cf42-d85f-4608-a248-329d35f5c84f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22d7cf42-d85f-4608-a248-329d35f5c84f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:55 np0005539551 nova_compute[227360]: 2025-11-29 08:25:55.969 227364 DEBUG oslo_concurrency.lockutils [req-03aa945d-08e3-4029-a7a0-f9cb8d5a58ba req-ece52f49-8934-416d-a252-ffcd5831e097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.036 227364 DEBUG oslo_concurrency.lockutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "cf6e4b0b039e83fc383c917f15db93c582343382" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.076 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404756.0359921, 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.076 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.083 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.101 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.103 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start _get_guest_xml network_info=[{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:df:bc:22"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c73d83e2-0a3c-4e73-907f-124979a361df', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.103 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.105 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.108 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404756.036217, 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.108 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.117 227364 WARNING nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.123 227364 DEBUG nova.virt.libvirt.host [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.123 227364 DEBUG nova.virt.libvirt.host [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.127 227364 DEBUG nova.virt.libvirt.host [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.127 227364 DEBUG nova.virt.libvirt.host [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.128 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.128 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.128 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.129 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.130 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.130 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.130 227364 DEBUG nova.virt.hardware [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.130 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.131 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.134 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.154 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.176 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:56 np0005539551 podman[277445]: 2025-11-29 08:25:56.338111408 +0000 UTC m=+0.040838465 container create 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.345 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:56 np0005539551 systemd[1]: Started libpod-conmon-0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c.scope.
Nov 29 03:25:56 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:25:56 np0005539551 podman[277445]: 2025-11-29 08:25:56.31748854 +0000 UTC m=+0.020215617 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:56 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0077c04bd82baf0018bd0876ea0306842a023e2bd9234004da46bf6b35b51bfb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:56 np0005539551 podman[277445]: 2025-11-29 08:25:56.435607406 +0000 UTC m=+0.138334493 container init 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:25:56 np0005539551 podman[277445]: 2025-11-29 08:25:56.449767698 +0000 UTC m=+0.152494765 container start 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:56 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [NOTICE]   (277464) : New worker (277466) forked
Nov 29 03:25:56 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [NOTICE]   (277464) : Loading success.
Nov 29 03:25:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3303924954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.568 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.622 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.875 227364 DEBUG nova.compute.manager [req-a258e415-4007-4ad5-bb23-c7ca35dc01ac req-9ddf4573-5a1a-4064-8e77-c37414417bb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.875 227364 DEBUG oslo_concurrency.lockutils [req-a258e415-4007-4ad5-bb23-c7ca35dc01ac req-9ddf4573-5a1a-4064-8e77-c37414417bb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.875 227364 DEBUG oslo_concurrency.lockutils [req-a258e415-4007-4ad5-bb23-c7ca35dc01ac req-9ddf4573-5a1a-4064-8e77-c37414417bb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.876 227364 DEBUG oslo_concurrency.lockutils [req-a258e415-4007-4ad5-bb23-c7ca35dc01ac req-9ddf4573-5a1a-4064-8e77-c37414417bb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.876 227364 DEBUG nova.compute.manager [req-a258e415-4007-4ad5-bb23-c7ca35dc01ac req-9ddf4573-5a1a-4064-8e77-c37414417bb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Processing event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.876 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.880 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404756.880444, 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.880 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.883 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.887 227364 INFO nova.virt.libvirt.driver [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Instance spawned successfully.#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.887 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.900 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.906 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.926 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.926 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.926 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.927 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.927 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.927 227364 DEBUG nova.virt.libvirt.driver [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:56 np0005539551 nova_compute[227360]: 2025-11-29 08:25:56.952 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.000 227364 INFO nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Took 8.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.001 227364 DEBUG nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2152765087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.060 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.061 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.093 227364 INFO nova.compute.manager [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Took 9.13 seconds to build instance.#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.122 227364 DEBUG oslo_concurrency.lockutils [None req-bc60e115-3eab-43b2-832a-e7021c094291 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:57.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1247213432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.472 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.473 227364 DEBUG nova.virt.libvirt.vif [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1341744770',display_name='tempest-ServerStableDeviceRescueTest-server-1341744770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1341744770',id=135,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-wgna8oed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:46Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=e5dc7e85-787d-4ed8-9752-a604a1815f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:df:bc:22"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.473 227364 DEBUG nova.network.os_vif_util [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:df:bc:22"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.474 227364 DEBUG nova.network.os_vif_util [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.475 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.660 227364 DEBUG nova.compute.manager [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.660 227364 DEBUG oslo_concurrency.lockutils [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.660 227364 DEBUG oslo_concurrency.lockutils [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.661 227364 DEBUG oslo_concurrency.lockutils [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.661 227364 DEBUG nova.compute.manager [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.661 227364 WARNING nova.compute.manager [req-9ced4801-aefd-43bb-8966-7caff94bfbb7 req-abbc6247-c318-4b47-9ff1-36dba830044f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.679 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <uuid>e5dc7e85-787d-4ed8-9752-a604a1815f2b</uuid>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <name>instance-00000087</name>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1341744770</nova:name>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:25:56</nova:creationTime>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <nova:port uuid="31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="serial">e5dc7e85-787d-4ed8-9752-a604a1815f2b</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="uuid">e5dc7e85-787d-4ed8-9752-a604a1815f2b</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.rescue">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <boot order="1"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:df:bc:22"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <target dev="tap31cb9e6c-fe"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/console.log" append="off"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:25:57 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:25:57 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:25:57 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:25:57 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.689 227364 INFO nova.virt.libvirt.driver [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance destroyed successfully.#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.741 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.741 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.742 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.742 227364 DEBUG nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:df:bc:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.742 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Using config drive#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.763 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.790 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:57 np0005539551 nova_compute[227360]: 2025-11-29 08:25:57.823 227364 DEBUG nova.objects.instance [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'keypairs' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:58 np0005539551 nova_compute[227360]: 2025-11-29 08:25:58.687 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:25:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:59.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.502 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Creating config drive at /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.509 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0x0dxa3_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.611 227364 DEBUG nova.compute.manager [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.612 227364 DEBUG oslo_concurrency.lockutils [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.612 227364 DEBUG oslo_concurrency.lockutils [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.612 227364 DEBUG oslo_concurrency.lockutils [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.613 227364 DEBUG nova.compute.manager [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] No waiting events found dispatching network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.613 227364 WARNING nova.compute.manager [req-67f37705-af8d-40f4-ad83-5a3ef6b2567b req-d7633141-7660-4d58-abf5-5c73924578ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received unexpected event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.663 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0x0dxa3_" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.690 227364 DEBUG nova.storage.rbd_utils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.694 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.849 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.850 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.850 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.850 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.851 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.853 227364 INFO nova.compute.manager [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Terminating instance#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.854 227364 DEBUG nova.compute.manager [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:25:59 np0005539551 kernel: tapd5958e80-e5 (unregistering): left promiscuous mode
Nov 29 03:25:59 np0005539551 NetworkManager[48922]: <info>  [1764404759.8925] device (tapd5958e80-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:59Z|00558|binding|INFO|Releasing lport d5958e80-e513-441d-9b72-15ef94535bfd from this chassis (sb_readonly=0)
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.902 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:59Z|00559|binding|INFO|Setting lport d5958e80-e513-441d-9b72-15ef94535bfd down in Southbound
Nov 29 03:25:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:25:59Z|00560|binding|INFO|Removing iface tapd5958e80-e5 ovn-installed in OVS
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.905 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:59.910 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:5a:cd 10.100.0.8'], port_security=['fa:16:3e:b5:5a:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3bf7e58f-cace-4eee-a7ac-44c1ab096c2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22d7cf42-d85f-4608-a248-329d35f5c84f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34cdf0e4d08f40cfb54455e1f681aea2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f764df6-65a5-4758-a40e-a7606418d973', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a7ab28e-e4f3-49d2-9ff0-0393527af031, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d5958e80-e513-441d-9b72-15ef94535bfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:59.911 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d5958e80-e513-441d-9b72-15ef94535bfd in datapath 22d7cf42-d85f-4608-a248-329d35f5c84f unbound from our chassis#033[00m
Nov 29 03:25:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:59.913 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22d7cf42-d85f-4608-a248-329d35f5c84f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:59.913 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[37908544-c9b7-4d09-9d5e-12d1320e8c8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:25:59.914 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f namespace which is not needed anymore#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.918 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539551 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 29 03:25:59 np0005539551 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000089.scope: Consumed 3.434s CPU time.
Nov 29 03:25:59 np0005539551 systemd-machined[190756]: Machine qemu-60-instance-00000089 terminated.
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.992 227364 DEBUG oslo_concurrency.processutils [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue e5dc7e85-787d-4ed8-9752-a604a1815f2b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:59 np0005539551 nova_compute[227360]: 2025-11-29 08:25:59.992 227364 INFO nova.virt.libvirt.driver [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Deleting local config drive /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [NOTICE]   (277464) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [NOTICE]   (277464) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [WARNING]  (277464) : Exiting Master process...
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [WARNING]  (277464) : Exiting Master process...
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [ALERT]    (277464) : Current worker (277466) exited with code 143 (Terminated)
Nov 29 03:26:00 np0005539551 neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f[277460]: [WARNING]  (277464) : All workers exited. Exiting... (0)
Nov 29 03:26:00 np0005539551 systemd[1]: libpod-0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c.scope: Deactivated successfully.
Nov 29 03:26:00 np0005539551 podman[277625]: 2025-11-29 08:26:00.039407237 +0000 UTC m=+0.045440321 container died 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.0444] manager: (tap31cb9e6c-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 03:26:00 np0005539551 systemd-udevd[277603]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:00 np0005539551 kernel: tap31cb9e6c-fe: entered promiscuous mode
Nov 29 03:26:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:00Z|00561|binding|INFO|Claiming lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for this chassis.
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:00Z|00562|binding|INFO|31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441: Claiming fa:16:3e:df:bc:22 10.100.0.14
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.0553] device (tap31cb9e6c-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.0559] device (tap31cb9e6c-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:00Z|00563|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 ovn-installed in OVS
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.0740] manager: (tapd5958e80-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 03:26:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay-0077c04bd82baf0018bd0876ea0306842a023e2bd9234004da46bf6b35b51bfb-merged.mount: Deactivated successfully.
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 podman[277625]: 2025-11-29 08:26:00.083674774 +0000 UTC m=+0.089707858 container cleanup 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.088 227364 INFO nova.virt.libvirt.driver [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Instance destroyed successfully.#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.089 227364 DEBUG nova.objects.instance [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lazy-loading 'resources' on Instance uuid 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:00 np0005539551 systemd-machined[190756]: New machine qemu-61-instance-00000087.
Nov 29 03:26:00 np0005539551 systemd[1]: Started Virtual Machine qemu-61-instance-00000087.
Nov 29 03:26:00 np0005539551 systemd[1]: libpod-conmon-0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c.scope: Deactivated successfully.
Nov 29 03:26:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:00 np0005539551 podman[277676]: 2025-11-29 08:26:00.138795295 +0000 UTC m=+0.034854143 container remove 0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.144 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[417e85c6-012a-41ae-a055-a50859b2a538]: (4, ('Sat Nov 29 08:25:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f (0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c)\n0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c\nSat Nov 29 08:26:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f (0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c)\n0d27771f50d9cc19af277b21ef743b1d5ea180c1d38f4c5ce9f214c346e2416c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.145 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa68ba-86ef-49ed-84c2-8ea7bc1fe86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.146 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22d7cf42-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.147 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.160 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 kernel: tap22d7cf42-d0: left promiscuous mode
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.166 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.168 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb6758b-0ccd-458a-bf91-80548b8fe5b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.182 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a7170776-70aa-4e7c-8516-dc6683c5b1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.183 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa795d5-82c2-4989-b21a-894b5709bc4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.196 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5488fa6e-fbfe-42c3-9d4e-1503e3db18cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780259, 'reachable_time': 16478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277704, 'error': None, 'target': 'ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 systemd[1]: run-netns-ovnmeta\x2d22d7cf42\x2dd85f\x2d4608\x2da248\x2d329d35f5c84f.mount: Deactivated successfully.
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.200 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22d7cf42-d85f-4608-a248-329d35f5c84f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.201 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[5e232d88-5d05-49ab-a45e-77a5e611afb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.226 227364 DEBUG nova.virt.libvirt.vif [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1657914540',display_name='tempest-ServerAddressesNegativeTestJSON-server-1657914540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1657914540',id=137,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='34cdf0e4d08f40cfb54455e1f681aea2',ramdisk_id='',reservation_id='r-kaowa8y0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-101422234',owner_user_name='tempest-ServerAddressesNegativeTestJSON-101422234-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:57Z,user_data=None,user_id='8516d6a79ffe412bbe8552fc233adf59',uuid=3bf7e58f-cace-4eee-a7ac-44c1ab096c2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.226 227364 DEBUG nova.network.os_vif_util [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converting VIF {"id": "d5958e80-e513-441d-9b72-15ef94535bfd", "address": "fa:16:3e:b5:5a:cd", "network": {"id": "22d7cf42-d85f-4608-a248-329d35f5c84f", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1168559715-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34cdf0e4d08f40cfb54455e1f681aea2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5958e80-e5", "ovs_interfaceid": "d5958e80-e513-441d-9b72-15ef94535bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.227 227364 DEBUG nova.network.os_vif_util [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.227 227364 DEBUG os_vif [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.229 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5958e80-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.231 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.234 227364 INFO os_vif [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:5a:cd,bridge_name='br-int',has_traffic_filtering=True,id=d5958e80-e513-441d-9b72-15ef94535bfd,network=Network(22d7cf42-d85f-4608-a248-329d35f5c84f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5958e80-e5')#033[00m
Nov 29 03:26:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:00Z|00564|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 up in Southbound
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.423 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.426 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.429 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.441 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4de1de-9d3f-437c-bd85-49a744ba7b10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.446 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.448 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.448 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b935dbaf-f538-4768-a7bb-39816e1529a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.452 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9e87c2c8-9e82-4c4a-8199-166720f561d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.467 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1f5450-4ee3-4370-8018-da48eddf55f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.493 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c941cb75-ccd4-44dd-9e29-a4692ef42da6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.525 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[46ae2f0c-e1c9-4ea0-8799-eb52c7148b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.535 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfec29b-ba81-408d-bee7-bce5c638bec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.5389] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.587 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[cd206864-cf64-42a1-bc7a-c258ebacf585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.591 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4be74231-e30f-4a5a-aada-7f18cd213509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.6183] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.627 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebc9340-a036-4d2f-81b3-c4dd81f9677e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.631 227364 DEBUG nova.compute.manager [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.631 227364 DEBUG oslo_concurrency.lockutils [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.631 227364 DEBUG oslo_concurrency.lockutils [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.631 227364 DEBUG oslo_concurrency.lockutils [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.631 227364 DEBUG nova.compute.manager [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.632 227364 WARNING nova.compute.manager [req-fa7ede94-62a8-4f78-bbd0-5e4888f00a68 req-f130b823-e6c2-4f76-b2d5-e927cd2b5d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.656 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[817385ba-4ef5-4187-af8d-a263cc8d0843]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780749, 'reachable_time': 21887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277746, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.673 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb626d6-c283-4802-819e-0add5a9bbd64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 780749, 'tstamp': 780749}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277748, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.691 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e164f3a0-d206-4cf8-973e-c551ed105086]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780749, 'reachable_time': 21887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277756, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.722 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe3f2d9-cecd-4432-99ac-2c1a4be9f2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.782 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[de3d7e18-83a8-4303-ad77-40eceaae253b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.783 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.783 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.784 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:26:00 np0005539551 NetworkManager[48922]: <info>  [1764404760.7861] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.789 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:00Z|00565|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.792 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.793 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1bda42db-588f-478b-8b9c-7a0bf8cb580f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.794 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:00.794 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.954 227364 INFO nova.virt.libvirt.driver [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Deleting instance files /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_del#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.955 227364 INFO nova.virt.libvirt.driver [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Deletion of /var/lib/nova/instances/3bf7e58f-cace-4eee-a7ac-44c1ab096c2f_del complete#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.988 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for e5dc7e85-787d-4ed8-9752-a604a1815f2b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.988 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404760.9877446, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.989 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:00 np0005539551 nova_compute[227360]: 2025-11-29 08:26:00.993 227364 DEBUG nova.compute.manager [None req-fc1d4b32-311b-4e0c-a57a-81539fb401ee 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.046 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.050 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.067 227364 INFO nova.compute.manager [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.068 227364 DEBUG oslo.service.loopingcall [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.068 227364 DEBUG nova.compute.manager [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.068 227364 DEBUG nova.network.neutron [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.084 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.085 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404760.9893064, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.085 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.110 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.113 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:01 np0005539551 podman[277841]: 2025-11-29 08:26:01.133023262 +0000 UTC m=+0.041042432 container create 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:01 np0005539551 systemd[1]: Started libpod-conmon-201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c.scope.
Nov 29 03:26:01 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:26:01 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555de4670b6d9146292c723a4690e36ac86cb3800cc1e7e1c8958c3f9cd3b61a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:01 np0005539551 podman[277841]: 2025-11-29 08:26:01.111757846 +0000 UTC m=+0.019777026 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:01 np0005539551 podman[277841]: 2025-11-29 08:26:01.219707187 +0000 UTC m=+0.127726357 container init 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:26:01 np0005539551 podman[277841]: 2025-11-29 08:26:01.230303784 +0000 UTC m=+0.138322954 container start 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:26:01 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [NOTICE]   (277860) : New worker (277862) forked
Nov 29 03:26:01 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [NOTICE]   (277860) : Loading success.
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.341 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.759 227364 DEBUG nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-unplugged-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.760 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.760 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.760 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.760 227364 DEBUG nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] No waiting events found dispatching network-vif-unplugged-d5958e80-e513-441d-9b72-15ef94535bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.761 227364 DEBUG nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-unplugged-d5958e80-e513-441d-9b72-15ef94535bfd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.761 227364 DEBUG nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.762 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.762 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.762 227364 DEBUG oslo_concurrency.lockutils [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.762 227364 DEBUG nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] No waiting events found dispatching network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:01 np0005539551 nova_compute[227360]: 2025-11-29 08:26:01.763 227364 WARNING nova.compute.manager [req-6b096949-0e77-44da-a96e-ec4919d669f7 req-6489b5a8-204d-4206-abff-a6b6bd46a122 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received unexpected event network-vif-plugged-d5958e80-e513-441d-9b72-15ef94535bfd for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.115 227364 DEBUG nova.network.neutron [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.132 227364 INFO nova.compute.manager [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.178 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.178 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.242 227364 DEBUG nova.compute.manager [req-fa726db2-5242-4946-99e4-9867c78dae93 req-672c45af-5fd9-40b1-bd07-79fb44c657d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Received event network-vif-deleted-d5958e80-e513-441d-9b72-15ef94535bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.278 227364 DEBUG nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.297 227364 DEBUG nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.298 227364 DEBUG nova.compute.provider_tree [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.309 227364 DEBUG nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.326 227364 DEBUG nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.373 227364 DEBUG oslo_concurrency.processutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.743 227364 DEBUG nova.compute.manager [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.744 227364 DEBUG oslo_concurrency.lockutils [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.745 227364 DEBUG oslo_concurrency.lockutils [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.745 227364 DEBUG oslo_concurrency.lockutils [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.745 227364 DEBUG nova.compute.manager [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.746 227364 WARNING nova.compute.manager [req-f3de54e6-26ba-4a5d-b520-f90c4ff3ee2c req-fd55b02b-a554-4407-8a1e-644b199632ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:26:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/966392733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.871 227364 DEBUG oslo_concurrency.processutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.878 227364 DEBUG nova.compute.provider_tree [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.896 227364 DEBUG nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.918 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:02 np0005539551 nova_compute[227360]: 2025-11-29 08:26:02.946 227364 INFO nova.scheduler.client.report [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Deleted allocations for instance 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f#033[00m
Nov 29 03:26:03 np0005539551 nova_compute[227360]: 2025-11-29 08:26:03.028 227364 DEBUG oslo_concurrency.lockutils [None req-79cadd23-e955-4d1c-b73c-f0cbd2d5a5e8 8516d6a79ffe412bbe8552fc233adf59 34cdf0e4d08f40cfb54455e1f681aea2 - - default default] Lock "3bf7e58f-cace-4eee-a7ac-44c1ab096c2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:03 np0005539551 nova_compute[227360]: 2025-11-29 08:26:03.404 227364 INFO nova.compute.manager [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Unrescuing#033[00m
Nov 29 03:26:03 np0005539551 nova_compute[227360]: 2025-11-29 08:26:03.405 227364 DEBUG oslo_concurrency.lockutils [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:03 np0005539551 nova_compute[227360]: 2025-11-29 08:26:03.405 227364 DEBUG oslo_concurrency.lockutils [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:03 np0005539551 nova_compute[227360]: 2025-11-29 08:26:03.405 227364 DEBUG nova.network.neutron [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:04 np0005539551 nova_compute[227360]: 2025-11-29 08:26:04.654 227364 DEBUG nova.network.neutron [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:04 np0005539551 nova_compute[227360]: 2025-11-29 08:26:04.671 227364 DEBUG oslo_concurrency.lockutils [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:04 np0005539551 nova_compute[227360]: 2025-11-29 08:26:04.671 227364 DEBUG nova.objects.instance [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:04 np0005539551 kernel: tap31cb9e6c-fe (unregistering): left promiscuous mode
Nov 29 03:26:04 np0005539551 NetworkManager[48922]: <info>  [1764404764.8735] device (tap31cb9e6c-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:04Z|00566|binding|INFO|Releasing lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 from this chassis (sb_readonly=0)
Nov 29 03:26:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:04Z|00567|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 down in Southbound
Nov 29 03:26:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:04Z|00568|binding|INFO|Removing iface tap31cb9e6c-fe ovn-installed in OVS
Nov 29 03:26:04 np0005539551 nova_compute[227360]: 2025-11-29 08:26:04.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:04.898 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:04.901 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:26:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:04.903 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:04.904 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[82f4ce33-314d-45e8-b9c2-74fa44fcaf01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:04.905 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:26:04 np0005539551 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:26:04 np0005539551 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000087.scope: Consumed 4.859s CPU time.
Nov 29 03:26:04 np0005539551 systemd-machined[190756]: Machine qemu-61-instance-00000087 terminated.
Nov 29 03:26:04 np0005539551 nova_compute[227360]: 2025-11-29 08:26:04.916 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:04 np0005539551 podman[277897]: 2025-11-29 08:26:04.989456098 +0000 UTC m=+0.088223288 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:26:04 np0005539551 podman[277896]: 2025-11-29 08:26:04.994119733 +0000 UTC m=+0.094671782 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [NOTICE]   (277860) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [NOTICE]   (277860) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [WARNING]  (277860) : Exiting Master process...
Nov 29 03:26:05 np0005539551 podman[277893]: 2025-11-29 08:26:05.014208926 +0000 UTC m=+0.118662810 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [ALERT]    (277860) : Current worker (277862) exited with code 143 (Terminated)
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[277856]: [WARNING]  (277860) : All workers exited. Exiting... (0)
Nov 29 03:26:05 np0005539551 systemd[1]: libpod-201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c.scope: Deactivated successfully.
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 podman[277974]: 2025-11-29 08:26:05.025201764 +0000 UTC m=+0.038550553 container died 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.036 227364 INFO nova.virt.libvirt.driver [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance destroyed successfully.#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.036 227364 DEBUG nova.objects.instance [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay-555de4670b6d9146292c723a4690e36ac86cb3800cc1e7e1c8958c3f9cd3b61a-merged.mount: Deactivated successfully.
Nov 29 03:26:05 np0005539551 podman[277974]: 2025-11-29 08:26:05.063199462 +0000 UTC m=+0.076548231 container cleanup 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:26:05 np0005539551 systemd[1]: libpod-conmon-201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c.scope: Deactivated successfully.
Nov 29 03:26:05 np0005539551 kernel: tap31cb9e6c-fe: entered promiscuous mode
Nov 29 03:26:05 np0005539551 systemd-udevd[277928]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00569|binding|INFO|Claiming lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for this chassis.
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00570|binding|INFO|31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441: Claiming fa:16:3e:df:bc:22 10.100.0.14
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.112 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.1144] manager: (tap31cb9e6c-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.1216] device (tap31cb9e6c-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.1224] device (tap31cb9e6c-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.122 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00571|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 ovn-installed in OVS
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00572|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 up in Southbound
Nov 29 03:26:05 np0005539551 podman[278018]: 2025-11-29 08:26:05.13153001 +0000 UTC m=+0.046139588 container remove 201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.131 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.133 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.138 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d5fc2f-6051-4e7e-9d4a-30bac94383d3]: (4, ('Sat Nov 29 08:26:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c)\n201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c\nSat Nov 29 08:26:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c)\n201a6e0d8b074e05e58e1a42fbd6904c8f0c9a090f6b5371f8b1ecc00804ce7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 systemd-machined[190756]: New machine qemu-62-instance-00000087.
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.139 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[221062bc-81be-408a-9958-a5290c511b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.140 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:05 np0005539551 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 systemd[1]: Started Virtual Machine qemu-62-instance-00000087.
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.156 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.160 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[01819499-9242-4451-a115-65233ca71d8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.179 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1a20b1fc-9757-445d-bcaa-26432c34ae01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.181 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[745f4914-bc43-457d-b283-c9f76aad528b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.196 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2e33c2-6f15-4050-99f3-51f1ddbd89c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780739, 'reachable_time': 15010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278051, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.199 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.199 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[88337f3f-a969-453e-9604-fe0d8eb0effb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.200 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:26:05 np0005539551 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.201 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.212 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d75481-f8f1-4f99-bbc8-0a7128d27a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.213 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.214 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.214 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[94f22433-11e2-46d2-b9f0-1127342ce1d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.215 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[abd2fe88-efb4-45f1-ace2-bf23b780db1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.227 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[05abe3fd-1c58-472e-9bf4-89a471786b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.244 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f814fc36-d45f-4389-8f3f-e4e70824f377]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.273 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7505b797-1d24-475f-b3eb-27c5147f7ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.2794] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.280 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[84be1814-5420-454f-9502-bd41922992cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.308 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf8ff98-8c6d-46cb-a63d-f49d3834b1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.312 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6e697e35-d902-4ea0-823c-c128e9921a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.320 227364 DEBUG nova.compute.manager [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.321 227364 DEBUG oslo_concurrency.lockutils [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.321 227364 DEBUG oslo_concurrency.lockutils [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.321 227364 DEBUG oslo_concurrency.lockutils [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.321 227364 DEBUG nova.compute.manager [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.322 227364 WARNING nova.compute.manager [req-c9e9253c-120c-4d72-bc7f-48d105e553e5 req-b6352bb1-98da-4431-ac71-897cd3aa5c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.3310] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.337 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[35224828-df44-425b-84a0-4288f7d36b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.353 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b20532f7-e203-4970-b141-c4857d89a6db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781221, 'reachable_time': 32398, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278094, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:05.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.369 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[998faca8-f7f0-4fed-a82d-31ba0cd85cda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 781221, 'tstamp': 781221}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278102, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.385 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a15a1aa6-c9ee-46e9-b1b1-fa1b4dd735f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781221, 'reachable_time': 32398, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278112, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.410 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6800a240-abc0-4d68-ada3-b726265fd3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.463 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3db25059-55ed-4848-91c3-e11353b5af86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.464 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.464 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.465 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:05 np0005539551 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:26:05 np0005539551 NetworkManager[48922]: <info>  [1764404765.4682] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.468 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.470 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00573|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.475 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.486 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.487 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[007b5ccf-1081-4a21-a471-7ee6804da12a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.488 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:05.488 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.515 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for e5dc7e85-787d-4ed8-9752-a604a1815f2b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.515 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404765.514939, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.515 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.560 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.565 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.595 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.596 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404765.517546, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.596 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.620 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.622 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.650 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:26:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:05Z|00574|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:05 np0005539551 nova_compute[227360]: 2025-11-29 08:26:05.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:05 np0005539551 podman[278170]: 2025-11-29 08:26:05.865969419 +0000 UTC m=+0.051610057 container create 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:26:05 np0005539551 systemd[1]: Started libpod-conmon-78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8.scope.
Nov 29 03:26:05 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:26:05 np0005539551 podman[278170]: 2025-11-29 08:26:05.843658945 +0000 UTC m=+0.029299603 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:05 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aaff5931df7f2502af676e623f8318a79f27291e690a7c8178eaab8a4459d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:05 np0005539551 podman[278170]: 2025-11-29 08:26:05.94805384 +0000 UTC m=+0.133694528 container init 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:26:05 np0005539551 podman[278170]: 2025-11-29 08:26:05.955668995 +0000 UTC m=+0.141309633 container start 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [NOTICE]   (278190) : New worker (278192) forked
Nov 29 03:26:05 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [NOTICE]   (278190) : Loading success.
Nov 29 03:26:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:06 np0005539551 nova_compute[227360]: 2025-11-29 08:26:06.258 227364 DEBUG nova.compute.manager [None req-846bbe71-4980-459b-800f-cb43da8e27a8 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:06 np0005539551 nova_compute[227360]: 2025-11-29 08:26:06.345 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:07.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.440 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.441 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.441 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.441 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.441 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.442 227364 WARNING nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.442 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.442 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.443 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.443 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.443 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.443 227364 WARNING nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.444 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.444 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.444 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.444 227364 DEBUG oslo_concurrency.lockutils [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.445 227364 DEBUG nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:07 np0005539551 nova_compute[227360]: 2025-11-29 08:26:07.445 227364 WARNING nova.compute.manager [req-16831a56-9d37-445e-9bf9-79654a6e8282 req-648b35de-4b06-4bf9-870b-4558ea9d84bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:10 np0005539551 nova_compute[227360]: 2025-11-29 08:26:10.235 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539551 nova_compute[227360]: 2025-11-29 08:26:11.346 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:13.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:14.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:15 np0005539551 nova_compute[227360]: 2025-11-29 08:26:15.087 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404760.0857847, 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:15 np0005539551 nova_compute[227360]: 2025-11-29 08:26:15.088 227364 INFO nova.compute.manager [-] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:26:15 np0005539551 nova_compute[227360]: 2025-11-29 08:26:15.114 227364 DEBUG nova.compute.manager [None req-67f1c370-cb41-42df-a9ea-2e78eb3229e2 - - - - - -] [instance: 3bf7e58f-cace-4eee-a7ac-44c1ab096c2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:15 np0005539551 nova_compute[227360]: 2025-11-29 08:26:15.237 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:15.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:16.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:16 np0005539551 nova_compute[227360]: 2025-11-29 08:26:16.348 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:16 np0005539551 nova_compute[227360]: 2025-11-29 08:26:16.927 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:16 np0005539551 nova_compute[227360]: 2025-11-29 08:26:16.928 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:16 np0005539551 nova_compute[227360]: 2025-11-29 08:26:16.946 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.021 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.022 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.031 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.031 227364 INFO nova.compute.claims [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.156 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1835759283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.581 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.589 227364 DEBUG nova.compute.provider_tree [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.607 227364 DEBUG nova.scheduler.client.report [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.630 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.631 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.673 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.673 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.705 227364 INFO nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.732 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.871 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.873 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.874 227364 INFO nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Creating image(s)#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.914 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.947 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.972 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:17 np0005539551 nova_compute[227360]: 2025-11-29 08:26:17.976 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.004 227364 DEBUG nova.policy [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283f8136265e4425a5a31f840935b9ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.040 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.040 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.041 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.041 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.065 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.069 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.371 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.430 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] resizing rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:26:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.509 227364 DEBUG nova.objects.instance [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:18Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:bc:22 10.100.0.14
Nov 29 03:26:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:18Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:bc:22 10.100.0.14
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.530 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.531 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Ensure instance console log exists: /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.531 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.532 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.532 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:18 np0005539551 nova_compute[227360]: 2025-11-29 08:26:18.787 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Successfully created port: 88452b5d-373a-407c-a8ab-aba7b53de034 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:26:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:19 np0005539551 nova_compute[227360]: 2025-11-29 08:26:19.565 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Successfully updated port: 88452b5d-373a-407c-a8ab-aba7b53de034 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:26:19 np0005539551 nova_compute[227360]: 2025-11-29 08:26:19.584 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:19 np0005539551 nova_compute[227360]: 2025-11-29 08:26:19.585 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:19 np0005539551 nova_compute[227360]: 2025-11-29 08:26:19.586 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:19 np0005539551 nova_compute[227360]: 2025-11-29 08:26:19.782 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:26:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:19.876 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:19.877 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:19.877 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.241 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.594 227364 DEBUG nova.network.neutron [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.614 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.614 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance network_info: |[{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.616 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start _get_guest_xml network_info=[{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.620 227364 WARNING nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.626 227364 DEBUG nova.virt.libvirt.host [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.627 227364 DEBUG nova.virt.libvirt.host [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.630 227364 DEBUG nova.virt.libvirt.host [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.630 227364 DEBUG nova.virt.libvirt.host [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.631 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.631 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.631 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.631 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.632 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.633 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.633 227364 DEBUG nova.virt.hardware [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:20 np0005539551 nova_compute[227360]: 2025-11-29 08:26:20.635 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3399868386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.155 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.180 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.184 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:21.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.429 227364 DEBUG nova.compute.manager [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-changed-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.429 227364 DEBUG nova.compute.manager [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Refreshing instance network info cache due to event network-changed-88452b5d-373a-407c-a8ab-aba7b53de034. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.430 227364 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.430 227364 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.430 227364 DEBUG nova.network.neutron [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Refreshing network info cache for port 88452b5d-373a-407c-a8ab-aba7b53de034 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1020025227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.659 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.660 227364 DEBUG nova.virt.libvirt.vif [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1522385452',display_name='tempest-ServerRescueNegativeTestJSON-server-1522385452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1522385452',id=139,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-c2vfh2rm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name
='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:17Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=6076e253-7727-49f1-9aa6-6c4ccc52fc56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.660 227364 DEBUG nova.network.os_vif_util [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.661 227364 DEBUG nova.network.os_vif_util [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.662 227364 DEBUG nova.objects.instance [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.682 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <uuid>6076e253-7727-49f1-9aa6-6c4ccc52fc56</uuid>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <name>instance-0000008b</name>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1522385452</nova:name>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:26:20</nova:creationTime>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <nova:port uuid="88452b5d-373a-407c-a8ab-aba7b53de034">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="serial">6076e253-7727-49f1-9aa6-6c4ccc52fc56</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="uuid">6076e253-7727-49f1-9aa6-6c4ccc52fc56</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:5d:a8:a2"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <target dev="tap88452b5d-37"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/console.log" append="off"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:26:21 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:26:21 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:26:21 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:26:21 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.682 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Preparing to wait for external event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.683 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.683 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.683 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.684 227364 DEBUG nova.virt.libvirt.vif [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1522385452',display_name='tempest-ServerRescueNegativeTestJSON-server-1522385452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1522385452',id=139,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-c2vfh2rm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner
_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:17Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=6076e253-7727-49f1-9aa6-6c4ccc52fc56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.684 227364 DEBUG nova.network.os_vif_util [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.685 227364 DEBUG nova.network.os_vif_util [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.685 227364 DEBUG os_vif [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.686 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.686 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.690 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88452b5d-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.690 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88452b5d-37, col_values=(('external_ids', {'iface-id': '88452b5d-373a-407c-a8ab-aba7b53de034', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:a8:a2', 'vm-uuid': '6076e253-7727-49f1-9aa6-6c4ccc52fc56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.692 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539551 NetworkManager[48922]: <info>  [1764404781.6935] manager: (tap88452b5d-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.699 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.700 227364 INFO os_vif [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37')#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.772 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.773 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.773 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:5d:a8:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.773 227364 INFO nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Using config drive#033[00m
Nov 29 03:26:21 np0005539551 nova_compute[227360]: 2025-11-29 08:26:21.796 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.261 227364 INFO nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Creating config drive at /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.266 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpph8sd2ea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.397 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpph8sd2ea" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.429 227364 DEBUG nova.storage.rbd_utils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.434 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.650 227364 DEBUG oslo_concurrency.processutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.651 227364 INFO nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Deleting local config drive /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:22 np0005539551 kernel: tap88452b5d-37: entered promiscuous mode
Nov 29 03:26:22 np0005539551 NetworkManager[48922]: <info>  [1764404782.7240] manager: (tap88452b5d-37): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 03:26:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:22Z|00575|binding|INFO|Claiming lport 88452b5d-373a-407c-a8ab-aba7b53de034 for this chassis.
Nov 29 03:26:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:22Z|00576|binding|INFO|88452b5d-373a-407c-a8ab-aba7b53de034: Claiming fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.740 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.741 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.743 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.754 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[83ac420d-9a8c-45db-903f-1fd4a138e6ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.756 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca67fce-61 in ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:22 np0005539551 systemd-udevd[278658]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.760 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca67fce-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.760 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[253f4bf7-c285-43c8-9de5-337adbad5de8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.761 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a151d755-102a-42e7-9b4b-da69e04fb0bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 systemd-machined[190756]: New machine qemu-63-instance-0000008b.
Nov 29 03:26:22 np0005539551 NetworkManager[48922]: <info>  [1764404782.7747] device (tap88452b5d-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:22 np0005539551 NetworkManager[48922]: <info>  [1764404782.7760] device (tap88452b5d-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.783 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f11ce4bf-e852-4054-bb70-d1ddd813b2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 systemd[1]: Started Virtual Machine qemu-63-instance-0000008b.
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.808 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:22Z|00577|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 ovn-installed in OVS
Nov 29 03:26:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:22Z|00578|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 up in Southbound
Nov 29 03:26:22 np0005539551 nova_compute[227360]: 2025-11-29 08:26:22.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.814 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d7395eb7-aad0-4ae3-abcf-a145b8fe0e59]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.842 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4a8e0c-2106-47e2-a16d-2ce64f4d3e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.847 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e78355a7-58cc-492f-bbfc-8591515dab1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 NetworkManager[48922]: <info>  [1764404782.8487] manager: (tap4ca67fce-60): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.880 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba8f81e-1fd5-4aca-a91e-0b096272e63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.883 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[351ef884-75df-4e18-88b7-d0a20a3cd99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 NetworkManager[48922]: <info>  [1764404782.9113] device (tap4ca67fce-60): carrier: link connected
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.917 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8d2f08-d3c2-4df9-b2d0-0d0e496b2d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.939 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[46ac32c4-ef1d-44a5-9242-e3383f4e3cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782979, 'reachable_time': 21142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278691, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.966 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[205529f9-1492-4ce5-a0ab-90d74ee0227e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:3cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782979, 'tstamp': 782979}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278692, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:22.984 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8b6e08-658f-4741-92c9-14471d2173a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782979, 'reachable_time': 21142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278693, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.013 227364 DEBUG nova.network.neutron [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updated VIF entry in instance network info cache for port 88452b5d-373a-407c-a8ab-aba7b53de034. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.013 227364 DEBUG nova.network.neutron [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.022 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[58220c50-c7cb-4b49-ab63-a2656507ff40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.091 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cffc4431-d961-4566-a5e9-b0ad669746c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.093 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.093 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.094 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.096 227364 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 NetworkManager[48922]: <info>  [1764404783.0977] manager: (tap4ca67fce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 03:26:23 np0005539551 kernel: tap4ca67fce-60: entered promiscuous mode
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.112 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:23Z|00579|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.141 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.144 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.145 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fba478f1-3f8d-4a94-a651-0ab23b196875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.145 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:23.146 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'env', 'PROCESS_TAG=haproxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.290 227364 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.291 227364 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.291 227364 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.291 227364 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.291 227364 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Processing event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.323 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 NetworkManager[48922]: <info>  [1764404783.3242] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 03:26:23 np0005539551 NetworkManager[48922]: <info>  [1764404783.3249] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 03:26:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:23Z|00580|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:23Z|00581|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:23 np0005539551 nova_compute[227360]: 2025-11-29 08:26:23.439 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539551 podman[278725]: 2025-11-29 08:26:23.582946755 +0000 UTC m=+0.072233835 container create 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:23 np0005539551 systemd[1]: Started libpod-conmon-2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976.scope.
Nov 29 03:26:23 np0005539551 podman[278725]: 2025-11-29 08:26:23.549911671 +0000 UTC m=+0.039198751 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:23 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:26:23 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45dd358841d0a84f5d63afe59acc9c51129428223823bf9d354281d1b37016a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:23 np0005539551 podman[278725]: 2025-11-29 08:26:23.687425361 +0000 UTC m=+0.176712461 container init 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:23 np0005539551 podman[278725]: 2025-11-29 08:26:23.693422143 +0000 UTC m=+0.182709213 container start 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:26:23 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [NOTICE]   (278744) : New worker (278746) forked
Nov 29 03:26:23 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [NOTICE]   (278744) : Loading success.
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.100 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.101 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404784.0998764, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.102 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.106 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.110 227364 INFO nova.virt.libvirt.driver [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance spawned successfully.#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.110 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.132 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.140 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.142 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.143 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.144 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.145 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.174 227364 DEBUG nova.virt.libvirt.driver [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.183 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.229 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.230 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404784.1012447, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.231 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.289 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.295 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404784.1035454, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.296 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.367 227364 INFO nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Took 6.50 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.368 227364 DEBUG nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.402 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.413 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.449 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.491 227364 INFO nova.compute.manager [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Took 7.50 seconds to build instance.#033[00m
Nov 29 03:26:24 np0005539551 nova_compute[227360]: 2025-11-29 08:26:24.515 227364 DEBUG oslo_concurrency.lockutils [None req-8e483755-a0bd-42e3-ad1f-b1b10a6dc597 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:25.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.626 227364 DEBUG nova.compute.manager [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.626 227364 DEBUG oslo_concurrency.lockutils [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.626 227364 DEBUG oslo_concurrency.lockutils [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.627 227364 DEBUG oslo_concurrency.lockutils [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.627 227364 DEBUG nova.compute.manager [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:25 np0005539551 nova_compute[227360]: 2025-11-29 08:26:25.627 227364 WARNING nova.compute.manager [req-9cef3c0f-80dd-472c-aff7-4d38aed5a832 req-ff610ca7-97b1-4973-bcba-c3d602adb833 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:26 np0005539551 nova_compute[227360]: 2025-11-29 08:26:26.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539551 nova_compute[227360]: 2025-11-29 08:26:26.693 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:27.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:27 np0005539551 nova_compute[227360]: 2025-11-29 08:26:27.547 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:29.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:30.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:31 np0005539551 nova_compute[227360]: 2025-11-29 08:26:31.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:31 np0005539551 nova_compute[227360]: 2025-11-29 08:26:31.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:32.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:33 np0005539551 nova_compute[227360]: 2025-11-29 08:26:33.360 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:34.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:34 np0005539551 nova_compute[227360]: 2025-11-29 08:26:34.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:35 np0005539551 nova_compute[227360]: 2025-11-29 08:26:35.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:35 np0005539551 podman[278848]: 2025-11-29 08:26:35.61319223 +0000 UTC m=+0.060383864 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:26:35 np0005539551 podman[278849]: 2025-11-29 08:26:35.613443907 +0000 UTC m=+0.057531077 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:26:35 np0005539551 podman[278847]: 2025-11-29 08:26:35.66527269 +0000 UTC m=+0.113240935 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:26:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:36.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.443 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.649 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.649 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.650 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.650 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:36 np0005539551 nova_compute[227360]: 2025-11-29 08:26:36.697 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:37.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:38Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:26:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:38Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.554 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [{"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.581 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-e5dc7e85-787d-4ed8-9752-a604a1815f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.581 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.581 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.581 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.582 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.602 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.602 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.602 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.603 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:26:38 np0005539551 nova_compute[227360]: 2025-11-29 08:26:38.603 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2744887289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.044 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.121 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.122 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.126 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.126 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.294 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.296 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3982MB free_disk=20.73072052001953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.296 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.296 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.395 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e5dc7e85-787d-4ed8-9752-a604a1815f2b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.395 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6076e253-7727-49f1-9aa6-6c4ccc52fc56 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.396 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.396 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:26:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:39.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.462 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2559527943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.866 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.873 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.890 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.914 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:26:39 np0005539551 nova_compute[227360]: 2025-11-29 08:26:39.915 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:40 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 29 03:26:40 np0005539551 nova_compute[227360]: 2025-11-29 08:26:40.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:40.469 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:40.470 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:26:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:41 np0005539551 nova_compute[227360]: 2025-11-29 08:26:41.445 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539551 nova_compute[227360]: 2025-11-29 08:26:41.699 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 03:26:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:43 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:43Z|00582|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:43 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:43Z|00583|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:43 np0005539551 nova_compute[227360]: 2025-11-29 08:26:43.482 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539551 nova_compute[227360]: 2025-11-29 08:26:43.744 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 03:26:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:45.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 03:26:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:46.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:46 np0005539551 nova_compute[227360]: 2025-11-29 08:26:46.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:46 np0005539551 nova_compute[227360]: 2025-11-29 08:26:46.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:26:46 np0005539551 nova_compute[227360]: 2025-11-29 08:26:46.447 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:46 np0005539551 nova_compute[227360]: 2025-11-29 08:26:46.700 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:47.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:26:47.472 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:48.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:50 np0005539551 nova_compute[227360]: 2025-11-29 08:26:50.619 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:51 np0005539551 nova_compute[227360]: 2025-11-29 08:26:51.449 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539551 nova_compute[227360]: 2025-11-29 08:26:51.717 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 03:26:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:52 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:26:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:52 np0005539551 nova_compute[227360]: 2025-11-29 08:26:52.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:54.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:54 np0005539551 nova_compute[227360]: 2025-11-29 08:26:54.986 227364 INFO nova.compute.manager [None req-c784c293-ab6a-4dc7-9f81-b1570cd634cf 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Pausing#033[00m
Nov 29 03:26:54 np0005539551 nova_compute[227360]: 2025-11-29 08:26:54.987 227364 DEBUG nova.objects.instance [None req-c784c293-ab6a-4dc7-9f81-b1570cd634cf 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.010 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404815.0098307, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.010 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.012 227364 DEBUG nova.compute.manager [None req-c784c293-ab6a-4dc7-9f81-b1570cd634cf 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.033 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.047 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:55Z|00584|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:26:55Z|00585|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.474 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.649 227364 INFO nova.compute.manager [None req-cd2c9bb7-b6cf-4cf3-be7e-9a56bcb60a95 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Unpausing#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.650 227364 DEBUG nova.objects.instance [None req-cd2c9bb7-b6cf-4cf3-be7e-9a56bcb60a95 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.672 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404815.6720626, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.672 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:55 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.676 227364 DEBUG nova.virt.libvirt.guest [None req-cd2c9bb7-b6cf-4cf3-be7e-9a56bcb60a95 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.676 227364 DEBUG nova.compute.manager [None req-cd2c9bb7-b6cf-4cf3-be7e-9a56bcb60a95 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.698 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.701 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:55 np0005539551 nova_compute[227360]: 2025-11-29 08:26:55.738 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 29 03:26:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:56.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:56 np0005539551 nova_compute[227360]: 2025-11-29 08:26:56.452 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:56 np0005539551 nova_compute[227360]: 2025-11-29 08:26:56.718 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:57 np0005539551 nova_compute[227360]: 2025-11-29 08:26:57.496 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:58.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:26:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.040 227364 INFO nova.compute.manager [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Rescuing#033[00m
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.042 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.042 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.043 227364 DEBUG nova.network.neutron [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:01.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.454 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539551 nova_compute[227360]: 2025-11-29 08:27:01.760 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:02 np0005539551 nova_compute[227360]: 2025-11-29 08:27:02.993 227364 DEBUG nova.network.neutron [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:03 np0005539551 nova_compute[227360]: 2025-11-29 08:27:03.031 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:03 np0005539551 nova_compute[227360]: 2025-11-29 08:27:03.327 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:27:03 np0005539551 nova_compute[227360]: 2025-11-29 08:27:03.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:03.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:05.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:05 np0005539551 kernel: tap88452b5d-37 (unregistering): left promiscuous mode
Nov 29 03:27:05 np0005539551 NetworkManager[48922]: <info>  [1764404825.6248] device (tap88452b5d-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:05Z|00586|binding|INFO|Releasing lport 88452b5d-373a-407c-a8ab-aba7b53de034 from this chassis (sb_readonly=0)
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.635 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:05Z|00587|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 down in Southbound
Nov 29 03:27:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:05Z|00588|binding|INFO|Removing iface tap88452b5d-37 ovn-installed in OVS
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.637 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.642 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.643 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.645 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.646 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[52b72df6-eb5a-4366-87ec-2b0aa8d13476]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.646 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace which is not needed anymore#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.657 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 29 03:27:05 np0005539551 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008b.scope: Consumed 15.888s CPU time.
Nov 29 03:27:05 np0005539551 systemd-machined[190756]: Machine qemu-63-instance-0000008b terminated.
Nov 29 03:27:05 np0005539551 podman[278954]: 2025-11-29 08:27:05.724006637 +0000 UTC m=+0.058460873 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:05 np0005539551 podman[278958]: 2025-11-29 08:27:05.729562117 +0000 UTC m=+0.062440251 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:27:05 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [NOTICE]   (278744) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:05 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [NOTICE]   (278744) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:05 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [WARNING]  (278744) : Exiting Master process...
Nov 29 03:27:05 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [ALERT]    (278744) : Current worker (278746) exited with code 143 (Terminated)
Nov 29 03:27:05 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[278740]: [WARNING]  (278744) : All workers exited. Exiting... (0)
Nov 29 03:27:05 np0005539551 systemd[1]: libpod-2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976.scope: Deactivated successfully.
Nov 29 03:27:05 np0005539551 podman[278972]: 2025-11-29 08:27:05.790058283 +0000 UTC m=+0.087220080 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:27:05 np0005539551 podman[279018]: 2025-11-29 08:27:05.79288733 +0000 UTC m=+0.046641942 container died 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay-e45dd358841d0a84f5d63afe59acc9c51129428223823bf9d354281d1b37016a-merged.mount: Deactivated successfully.
Nov 29 03:27:05 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:05 np0005539551 podman[279018]: 2025-11-29 08:27:05.828422601 +0000 UTC m=+0.082177213 container cleanup 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:27:05 np0005539551 systemd[1]: libpod-conmon-2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976.scope: Deactivated successfully.
Nov 29 03:27:05 np0005539551 podman[279065]: 2025-11-29 08:27:05.888495756 +0000 UTC m=+0.041464542 container remove 2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.894 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5f87df-1bd4-456c-aa3e-9a76e46d130e]: (4, ('Sat Nov 29 08:27:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976)\n2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976\nSat Nov 29 08:27:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976)\n2eef8a5d5a53e9bba08c99208c504043cf10a5d2d5b283662ceff5e6033cd976\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.896 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdf79c7-5824-4ee3-8a22-67580411df28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.898 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.909 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 kernel: tap4ca67fce-60: left promiscuous mode
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.916 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.920 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[17a54990-c1e0-42fc-b98b-3e750e3e29f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.940 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd57f78-b7c7-476b-a10d-af9dfb744399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.941 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed6916e-6b84-42ee-8410-23f4510476df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.960 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[09780e10-9e16-48dd-8d74-134b46c301c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782971, 'reachable_time': 19753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279091, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4ca67fce\x2d6116\x2d4a0b\x2db0a9\x2dc25b5adaad19.mount: Deactivated successfully.
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.963 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:05 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:05.963 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d075c10d-1702-4f0a-b5c6-29cfb126bf60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.972 227364 DEBUG nova.compute.manager [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.972 227364 DEBUG oslo_concurrency.lockutils [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.973 227364 DEBUG oslo_concurrency.lockutils [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.973 227364 DEBUG oslo_concurrency.lockutils [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.973 227364 DEBUG nova.compute.manager [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:05 np0005539551 nova_compute[227360]: 2025-11-29 08:27:05.973 227364 WARNING nova.compute.manager [req-f37f14e8-583d-4d39-b708-2a6ed4d55d81 req-d52ed9ea-2894-42c9-be66-e17ef113e0a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:27:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.340 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.346 227364 INFO nova.virt.libvirt.driver [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance destroyed successfully.#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.346 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'numa_topology' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.361 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Attempting rescue#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.362 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.367 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.368 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Creating image(s)#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.403 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.407 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.446 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.476 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.480 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.516 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.572 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.573 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.574 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.575 227364 DEBUG oslo_concurrency.lockutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.619 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.626 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:06 np0005539551 nova_compute[227360]: 2025-11-29 08:27:06.763 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.038 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.039 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.056 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.057 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start _get_guest_xml network_info=[{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:5d:a8:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.057 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.075 227364 WARNING nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.081 227364 DEBUG nova.virt.libvirt.host [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.082 227364 DEBUG nova.virt.libvirt.host [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.086 227364 DEBUG nova.virt.libvirt.host [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.087 227364 DEBUG nova.virt.libvirt.host [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.088 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.089 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.089 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.090 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.090 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.090 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.090 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.091 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.091 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.091 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.092 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.092 227364 DEBUG nova.virt.hardware [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.092 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.113 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:07.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2117407239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.585 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.586 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2419539750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.992 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:07 np0005539551 nova_compute[227360]: 2025-11-29 08:27:07.993 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.066 227364 DEBUG nova.compute.manager [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.067 227364 DEBUG oslo_concurrency.lockutils [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.067 227364 DEBUG oslo_concurrency.lockutils [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.068 227364 DEBUG oslo_concurrency.lockutils [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.068 227364 DEBUG nova.compute.manager [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.068 227364 WARNING nova.compute.manager [req-b38efbe4-800a-40ed-bd6d-0e5eb55915e9 req-a4af6865-326a-4c52-aaa0-3878fdd2bfd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:27:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:27:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:08.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:27:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/260902016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.414 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.415 227364 DEBUG nova.virt.libvirt.vif [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1522385452',display_name='tempest-ServerRescueNegativeTestJSON-server-1522385452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1522385452',id=139,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-c2vfh2rm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:55Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=6076e253-7727-49f1-9aa6-6c4ccc52fc56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:5d:a8:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.416 227364 DEBUG nova.network.os_vif_util [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:5d:a8:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.416 227364 DEBUG nova.network.os_vif_util [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.417 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.433 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <uuid>6076e253-7727-49f1-9aa6-6c4ccc52fc56</uuid>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <name>instance-0000008b</name>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1522385452</nova:name>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:27:07</nova:creationTime>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <nova:port uuid="88452b5d-373a-407c-a8ab-aba7b53de034">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="serial">6076e253-7727-49f1-9aa6-6c4ccc52fc56</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="uuid">6076e253-7727-49f1-9aa6-6c4ccc52fc56</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.rescue">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config.rescue">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:5d:a8:a2"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <target dev="tap88452b5d-37"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/console.log" append="off"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:27:08 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:27:08 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:27:08 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:27:08 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:27:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.439 227364 INFO nova.virt.libvirt.driver [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance destroyed successfully.#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.525 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.525 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.525 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.525 227364 DEBUG nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:5d:a8:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.526 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Using config drive#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.559 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.586 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:08 np0005539551 nova_compute[227360]: 2025-11-29 08:27:08.619 227364 DEBUG nova.objects.instance [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'keypairs' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.173 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Creating config drive at /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.182 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa1lrgqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.324 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa1lrgqk" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.377 227364 DEBUG nova.storage.rbd_utils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.382 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:09 np0005539551 nova_compute[227360]: 2025-11-29 08:27:09.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.002 227364 DEBUG oslo_concurrency.processutils [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue 6076e253-7727-49f1-9aa6-6c4ccc52fc56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.003 227364 INFO nova.virt.libvirt.driver [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Deleting local config drive /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:27:10 np0005539551 kernel: tap88452b5d-37: entered promiscuous mode
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.0605] manager: (tap88452b5d-37): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 03:27:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:10Z|00589|binding|INFO|Claiming lport 88452b5d-373a-407c-a8ab-aba7b53de034 for this chassis.
Nov 29 03:27:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:10Z|00590|binding|INFO|88452b5d-373a-407c-a8ab-aba7b53de034: Claiming fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.118 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.124 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.125 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.126 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.138 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[473b3add-7cd6-4d17-b842-0f736eab338b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.139 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca67fce-61 in ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:10 np0005539551 systemd-udevd[279325]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:10Z|00591|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 ovn-installed in OVS
Nov 29 03:27:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:10Z|00592|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 up in Southbound
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.142 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca67fce-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.142 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[35c5c9cd-22b7-482a-a8ab-694551e8430d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.143 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[614a8f95-2130-4d60-ab42-6a60a6e47ff6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 systemd-machined[190756]: New machine qemu-64-instance-0000008b.
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.1516] device (tap88452b5d-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.1525] device (tap88452b5d-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:10 np0005539551 systemd[1]: Started Virtual Machine qemu-64-instance-0000008b.
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.155 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d9711ffe-b03b-4da5-b5e4-c06a4960ae04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.178 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[acc6001b-3a23-4ff8-b19e-ccf49fc3b839]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.208 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[978917d1-033c-4df2-a8a0-2957886ec0bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 systemd-udevd[279328]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.2143] manager: (tap4ca67fce-60): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.213 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a59f3ddd-577f-4da0-af5f-c0138e11808b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.242 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1ab5d6-afe7-4fcd-a390-34bb14fa15fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.246 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[40302d19-6fa0-4702-8383-195831621a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.2668] device (tap4ca67fce-60): carrier: link connected
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.272 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6873e9-d22b-409a-984f-b3283c49846a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.287 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[442206b2-bb8e-4270-8c12-d593d714a0b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787714, 'reachable_time': 39309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279357, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.302 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5d1b04-3907-47d8-be42-2c5dfa624de3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:3cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787714, 'tstamp': 787714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279358, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.320 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[07de243c-4bc7-49e3-94ea-66b6a971fa1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787714, 'reachable_time': 39309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279359, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.348 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[53c4a0cf-e90d-42c0-8d99-87011bb9248f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.409 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e49895-e8b8-45ac-89c6-4d358a25a02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.411 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.411 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.412 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539551 kernel: tap4ca67fce-60: entered promiscuous mode
Nov 29 03:27:10 np0005539551 NetworkManager[48922]: <info>  [1764404830.4143] manager: (tap4ca67fce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.416 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:10Z|00593|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.432 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.433 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f34f2d0-e2ef-493f-a8fd-094596498117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.433 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:10.434 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'env', 'PROCESS_TAG=haproxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.620 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 6076e253-7727-49f1-9aa6-6c4ccc52fc56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.621 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404830.6199493, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.621 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.626 227364 DEBUG nova.compute.manager [None req-54a5c98a-ffda-4e8b-a93e-1054e63a9db3 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.652 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.656 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.705 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.706 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404830.6233652, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.706 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.730 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539551 nova_compute[227360]: 2025-11-29 08:27:10.737 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:10 np0005539551 podman[279451]: 2025-11-29 08:27:10.819483181 +0000 UTC m=+0.065165454 container create 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:27:10 np0005539551 systemd[1]: Started libpod-conmon-245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00.scope.
Nov 29 03:27:10 np0005539551 podman[279451]: 2025-11-29 08:27:10.777978668 +0000 UTC m=+0.023660971 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:10 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:27:10 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a41cfdf4478ba29f3cc98418aa94ee46fe8f675cfc651f2eca8d2c88b1e33e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:10 np0005539551 podman[279451]: 2025-11-29 08:27:10.900194024 +0000 UTC m=+0.145876377 container init 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:27:10 np0005539551 podman[279451]: 2025-11-29 08:27:10.907083221 +0000 UTC m=+0.152765524 container start 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:10 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [NOTICE]   (279470) : New worker (279472) forked
Nov 29 03:27:10 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [NOTICE]   (279470) : Loading success.
Nov 29 03:27:11 np0005539551 nova_compute[227360]: 2025-11-29 08:27:11.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:11 np0005539551 nova_compute[227360]: 2025-11-29 08:27:11.766 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.305 227364 DEBUG nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.306 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.306 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.306 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.307 227364 DEBUG nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.307 227364 WARNING nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.307 227364 DEBUG nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.307 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.308 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.308 227364 DEBUG oslo_concurrency.lockutils [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.308 227364 DEBUG nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.308 227364 WARNING nova.compute.manager [req-bab45227-a045-4319-8bb5-9d467dc35af3 req-05e8ca24-29f6-4552-9ea1-1a0d54ab8689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.394 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.395 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.413 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.498 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.500 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.520 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.521 227364 INFO nova.compute.claims [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:27:12 np0005539551 nova_compute[227360]: 2025-11-29 08:27:12.773 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.040 227364 INFO nova.compute.manager [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Unrescuing#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.042 227364 DEBUG oslo_concurrency.lockutils [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.042 227364 DEBUG oslo_concurrency.lockutils [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.043 227364 DEBUG nova.network.neutron [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1960947723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.222 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.227 227364 DEBUG nova.compute.provider_tree [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.241 227364 DEBUG nova.scheduler.client.report [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.267 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.267 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.323 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.323 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.341 227364 INFO nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.364 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:27:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.495 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.496 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.497 227364 INFO nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Creating image(s)
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.530 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.556 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.586 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.590 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.684 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.686 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.687 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.687 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.716 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:13 np0005539551 nova_compute[227360]: 2025-11-29 08:27:13.720 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 480ae817-3676-4499-a047-6b8b383e7bf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:14.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.114 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 480ae817-3676-4499-a047-6b8b383e7bf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.193 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] resizing rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.282 227364 DEBUG nova.policy [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64b11a4dc36b4f55b85dbe846183be55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae71059d02774857be85797a3be0e4e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.376 227364 DEBUG nova.objects.instance [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 480ae817-3676-4499-a047-6b8b383e7bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.462 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.463 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Ensure instance console log exists: /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.463 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.464 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:15 np0005539551 nova_compute[227360]: 2025-11-29 08:27:15.464 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:15.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:16.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:16 np0005539551 nova_compute[227360]: 2025-11-29 08:27:16.462 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:16 np0005539551 nova_compute[227360]: 2025-11-29 08:27:16.769 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.138 227364 DEBUG nova.network.neutron [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.337 227364 DEBUG oslo_concurrency.lockutils [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.338 227364 DEBUG nova.objects.instance [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:17 np0005539551 kernel: tap88452b5d-37 (unregistering): left promiscuous mode
Nov 29 03:27:17 np0005539551 NetworkManager[48922]: <info>  [1764404837.6785] device (tap88452b5d-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:17Z|00594|binding|INFO|Releasing lport 88452b5d-373a-407c-a8ab-aba7b53de034 from this chassis (sb_readonly=0)
Nov 29 03:27:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:17Z|00595|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 down in Southbound
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:17Z|00596|binding|INFO|Removing iface tap88452b5d-37 ovn-installed in OVS
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.692 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:17.750 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:27:17 np0005539551 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 29 03:27:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:17.752 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis
Nov 29 03:27:17 np0005539551 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008b.scope: Consumed 7.537s CPU time.
Nov 29 03:27:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:17.753 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:27:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:17.755 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[851e940e-a849-49c7-83d5-1d85fd1a0af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:17.756 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace which is not needed anymore
Nov 29 03:27:17 np0005539551 systemd-machined[190756]: Machine qemu-64-instance-0000008b terminated.
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [NOTICE]   (279470) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [NOTICE]   (279470) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [WARNING]  (279470) : Exiting Master process...
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [WARNING]  (279470) : Exiting Master process...
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [ALERT]    (279470) : Current worker (279472) exited with code 143 (Terminated)
Nov 29 03:27:17 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279466]: [WARNING]  (279470) : All workers exited. Exiting... (0)
Nov 29 03:27:17 np0005539551 systemd[1]: libpod-245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00.scope: Deactivated successfully.
Nov 29 03:27:17 np0005539551 podman[279693]: 2025-11-29 08:27:17.907788076 +0000 UTC m=+0.051397021 container died 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.906 227364 INFO nova.virt.libvirt.driver [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance destroyed successfully.
Nov 29 03:27:17 np0005539551 nova_compute[227360]: 2025-11-29 08:27:17.908 227364 DEBUG nova.objects.instance [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'numa_topology' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:17 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:17 np0005539551 systemd[1]: var-lib-containers-storage-overlay-0a41cfdf4478ba29f3cc98418aa94ee46fe8f675cfc651f2eca8d2c88b1e33e2-merged.mount: Deactivated successfully.
Nov 29 03:27:17 np0005539551 podman[279693]: 2025-11-29 08:27:17.945230789 +0000 UTC m=+0.088839724 container cleanup 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:17 np0005539551 systemd[1]: libpod-conmon-245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00.scope: Deactivated successfully.
Nov 29 03:27:18 np0005539551 podman[279729]: 2025-11-29 08:27:18.008701896 +0000 UTC m=+0.041598976 container remove 245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.015 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a6325197-f740-4004-9c71-b89dd203d836]: (4, ('Sat Nov 29 08:27:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00)\n245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00\nSat Nov 29 08:27:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00)\n245b39969d9db90726fd6a85858ca50d795967fadf081e0304a2da9a4beefd00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.017 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5c5d3c-02fd-4293-af95-77ed5085967f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.018 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.020 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:18 np0005539551 kernel: tap4ca67fce-60: left promiscuous mode
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.038 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.042 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbc8c09-8a84-4772-86ed-c7caee5a2dc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.067 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b265db66-88eb-477b-9f5d-c403b1955f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.069 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[60b84717-e65f-45ce-9578-84220f63fa92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.084 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5200f6-763e-4f00-88d6-9093a6812115]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787708, 'reachable_time': 30082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279753, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.086 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.086 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5eebac-5ec7-4f23-b5e5-c48b63a8c9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4ca67fce\x2d6116\x2d4a0b\x2db0a9\x2dc25b5adaad19.mount: Deactivated successfully.
Nov 29 03:27:18 np0005539551 kernel: tap88452b5d-37: entered promiscuous mode
Nov 29 03:27:18 np0005539551 systemd-udevd[279672]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.1258] manager: (tap88452b5d-37): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 03:27:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:18Z|00597|binding|INFO|Claiming lport 88452b5d-373a-407c-a8ab-aba7b53de034 for this chassis.
Nov 29 03:27:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:18Z|00598|binding|INFO|88452b5d-373a-407c-a8ab-aba7b53de034: Claiming fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.127 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.1361] device (tap88452b5d-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.1383] device (tap88452b5d-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:18Z|00599|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 ovn-installed in OVS
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.151 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.156 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 systemd-machined[190756]: New machine qemu-65-instance-0000008b.
Nov 29 03:27:18 np0005539551 systemd[1]: Started Virtual Machine qemu-65-instance-0000008b.
Nov 29 03:27:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:18Z|00600|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 up in Southbound
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.230 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.231 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.232 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:27:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:18.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.244 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c54a17b0-ec22-433b-a91b-e1518cda279a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.244 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca67fce-61 in ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.246 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca67fce-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.247 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ff992ef2-1392-4934-981b-6ef74e6efaa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.248 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[40e6ffbd-ed49-4501-b896-77da2d29f377]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.260 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[df565265-e4e3-4d73-9fa5-c1e77e73aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.272 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[151677b1-713a-4fc9-83f8-776052872e7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.296 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e428bccf-0e1d-4a5e-bd46-6c18731ac214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.3035] manager: (tap4ca67fce-60): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.302 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bba48d70-9964-4e9d-a390-a88baa577314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.335 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d6ebf6-79a9-4fa0-8df8-e9774387cfbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.340 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee19a7-9bb6-4505-90c8-039d8549a333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.3663] device (tap4ca67fce-60): carrier: link connected
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.371 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0a00c999-8012-4788-b571-407a1f65bd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.390 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[97dd260e-c78f-4cd8-8409-7fb8f3b8cc0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788524, 'reachable_time': 26209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279798, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.408 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[69c7fc49-7cf3-491f-8449-8feaf74c680b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:3cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788524, 'tstamp': 788524}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279799, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.425 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[864023fb-a775-412c-8bf6-e50fc98119c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788524, 'reachable_time': 26209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279800, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.468 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6279d71b-a793-455d-8604-893e568e2f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.538 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[35862b7c-8b41-4573-9672-02ff712f8a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.540 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.540 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.541 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.543 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 NetworkManager[48922]: <info>  [1764404838.5436] manager: (tap4ca67fce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 03:27:18 np0005539551 kernel: tap4ca67fce-60: entered promiscuous mode
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.545 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.546 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.547 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:18Z|00601|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.567 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.569 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.570 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b18b429d-a62a-470e-a977-4dd8cf8dcd37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.572 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:18 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:18.573 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'env', 'PROCESS_TAG=haproxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.676 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Successfully created port: 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.907 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 6076e253-7727-49f1-9aa6-6c4ccc52fc56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.907 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404838.906422, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:18 np0005539551 nova_compute[227360]: 2025-11-29 08:27:18.908 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.002 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.007 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:19 np0005539551 podman[279891]: 2025-11-29 08:27:19.04831317 +0000 UTC m=+0.067085956 container create 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:27:19 np0005539551 systemd[1]: Started libpod-conmon-7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471.scope.
Nov 29 03:27:19 np0005539551 podman[279891]: 2025-11-29 08:27:19.011866264 +0000 UTC m=+0.030639140 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:19 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:27:19 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca2c80c775578c4094791c04ba70e467613daf2d1f2a15bd7376f9b1b098163/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:19 np0005539551 podman[279891]: 2025-11-29 08:27:19.137545494 +0000 UTC m=+0.156318280 container init 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:27:19 np0005539551 podman[279891]: 2025-11-29 08:27:19.144858382 +0000 UTC m=+0.163631168 container start 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:27:19 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [NOTICE]   (279910) : New worker (279912) forked
Nov 29 03:27:19 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [NOTICE]   (279910) : Loading success.
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.308 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.309 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404838.9139426, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.309 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.328 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.332 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.352 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:27:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:19.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.750 227364 DEBUG nova.compute.manager [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.751 227364 DEBUG oslo_concurrency.lockutils [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.752 227364 DEBUG oslo_concurrency.lockutils [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.752 227364 DEBUG oslo_concurrency.lockutils [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.752 227364 DEBUG nova.compute.manager [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:19 np0005539551 nova_compute[227360]: 2025-11-29 08:27:19.753 227364 WARNING nova.compute.manager [req-5ba8d5a2-e938-47fe-9498-e1d83a928421 req-9b532ee2-aa41-4d3b-b489-347dbda88ae6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:27:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:19.877 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:19.878 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:19.878 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:20.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.455 227364 DEBUG nova.compute.manager [None req-b4ba94c3-721b-4003-9e1f-cb16a874011c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.804 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Successfully updated port: 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.825 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.826 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.827 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.908 227364 DEBUG nova.compute.manager [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-changed-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.909 227364 DEBUG nova.compute.manager [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Refreshing instance network info cache due to event network-changed-9eea9321-fa5c-4ec5-81e3-b6fba93e7545. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:20 np0005539551 nova_compute[227360]: 2025-11-29 08:27:20.909 227364 DEBUG oslo_concurrency.lockutils [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.465 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:21.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.498 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.770 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.884 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.885 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.885 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.885 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.885 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.886 227364 WARNING nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.886 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.886 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.886 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.886 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.887 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.887 227364 WARNING nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.887 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.887 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.887 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.888 227364 DEBUG oslo_concurrency.lockutils [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.888 227364 DEBUG nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:21 np0005539551 nova_compute[227360]: 2025-11-29 08:27:21.888 227364 WARNING nova.compute.manager [req-34082b06-fc2d-4264-a8ad-97a644026e67 req-e11cfb9c-5889-41ed-9e77-4d9e51f92cd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:22.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.296 227364 DEBUG nova.network.neutron [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updating instance_info_cache with network_info: [{"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.325 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.325 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Instance network_info: |[{"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.326 227364 DEBUG oslo_concurrency.lockutils [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.326 227364 DEBUG nova.network.neutron [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Refreshing network info cache for port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.332 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Start _get_guest_xml network_info=[{"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.338 227364 WARNING nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.348 227364 DEBUG nova.virt.libvirt.host [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.349 227364 DEBUG nova.virt.libvirt.host [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.354 227364 DEBUG nova.virt.libvirt.host [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.354 227364 DEBUG nova.virt.libvirt.host [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.356 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.356 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.356 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.357 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.357 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.357 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.357 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.358 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.358 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.358 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.359 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.359 227364 DEBUG nova.virt.hardware [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.362 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2298329597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.831 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.867 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:22 np0005539551 nova_compute[227360]: 2025-11-29 08:27:22.874 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3117153331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:23.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.516 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.519 227364 DEBUG nova.virt.libvirt.vif [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2116728451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2116728451',id=143,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-sgna7du5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:13Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=480ae817-3676-4499-a047-6b8b383e7bf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.519 227364 DEBUG nova.network.os_vif_util [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.521 227364 DEBUG nova.network.os_vif_util [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.523 227364 DEBUG nova.objects.instance [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 480ae817-3676-4499-a047-6b8b383e7bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.551 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <uuid>480ae817-3676-4499-a047-6b8b383e7bf2</uuid>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <name>instance-0000008f</name>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-2116728451</nova:name>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:27:22</nova:creationTime>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <nova:port uuid="9eea9321-fa5c-4ec5-81e3-b6fba93e7545">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="serial">480ae817-3676-4499-a047-6b8b383e7bf2</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="uuid">480ae817-3676-4499-a047-6b8b383e7bf2</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/480ae817-3676-4499-a047-6b8b383e7bf2_disk">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/480ae817-3676-4499-a047-6b8b383e7bf2_disk.config">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a2:58:94"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <target dev="tap9eea9321-fa"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/console.log" append="off"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:27:23 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:27:23 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:27:23 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:27:23 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.553 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Preparing to wait for external event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.554 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.555 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.555 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.567 227364 DEBUG nova.virt.libvirt.vif [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2116728451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2116728451',id=143,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-sgna7du5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:13Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=480ae817-3676-4499-a047-6b8b383e7bf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.568 227364 DEBUG nova.network.os_vif_util [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.569 227364 DEBUG nova.network.os_vif_util [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.570 227364 DEBUG os_vif [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.571 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.572 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.582 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.590 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9eea9321-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.590 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9eea9321-fa, col_values=(('external_ids', {'iface-id': '9eea9321-fa5c-4ec5-81e3-b6fba93e7545', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:58:94', 'vm-uuid': '480ae817-3676-4499-a047-6b8b383e7bf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.592 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539551 NetworkManager[48922]: <info>  [1764404843.5931] manager: (tap9eea9321-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.594 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.600 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.602 227364 INFO os_vif [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa')#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.667 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.667 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.668 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:a2:58:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.669 227364 INFO nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Using config drive#033[00m
Nov 29 03:27:23 np0005539551 nova_compute[227360]: 2025-11-29 08:27:23.695 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:24.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.501 227364 INFO nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Creating config drive at /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.505 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zap3zll execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.637 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zap3zll" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.675 227364 DEBUG nova.storage.rbd_utils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 480ae817-3676-4499-a047-6b8b383e7bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.680 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config 480ae817-3676-4499-a047-6b8b383e7bf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.714 227364 DEBUG nova.network.neutron [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updated VIF entry in instance network info cache for port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.715 227364 DEBUG nova.network.neutron [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updating instance_info_cache with network_info: [{"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.744 227364 DEBUG oslo_concurrency.lockutils [req-8f6a255b-9110-4daf-8650-374c342cde2b req-61a9c0db-49a7-4b32-9bec-bde3ade9f628 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.898 227364 DEBUG oslo_concurrency.processutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config 480ae817-3676-4499-a047-6b8b383e7bf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.899 227364 INFO nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Deleting local config drive /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:27:24 np0005539551 kernel: tap9eea9321-fa: entered promiscuous mode
Nov 29 03:27:24 np0005539551 NetworkManager[48922]: <info>  [1764404844.9437] manager: (tap9eea9321-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 03:27:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:24Z|00602|binding|INFO|Claiming lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for this chassis.
Nov 29 03:27:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:24Z|00603|binding|INFO|9eea9321-fa5c-4ec5-81e3-b6fba93e7545: Claiming fa:16:3e:a2:58:94 10.100.0.13
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:24Z|00604|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 ovn-installed in OVS
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:24 np0005539551 nova_compute[227360]: 2025-11-29 08:27:24.964 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:24 np0005539551 systemd-udevd[280058]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:24 np0005539551 systemd-machined[190756]: New machine qemu-66-instance-0000008f.
Nov 29 03:27:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:24.989 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:58:94 10.100.0.13'], port_security=['fa:16:3e:a2:58:94 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '480ae817-3676-4499-a047-6b8b383e7bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9eea9321-fa5c-4ec5-81e3-b6fba93e7545) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:24.990 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:27:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:24.991 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:27:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:24Z|00605|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 up in Southbound
Nov 29 03:27:25 np0005539551 NetworkManager[48922]: <info>  [1764404845.0010] device (tap9eea9321-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:25 np0005539551 NetworkManager[48922]: <info>  [1764404845.0021] device (tap9eea9321-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:25 np0005539551 systemd[1]: Started Virtual Machine qemu-66-instance-0000008f.
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.003 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[53654dae-15ec-4d7a-9dbf-52d39bd38867]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.004 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9d41f0a-11 in ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.005 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9d41f0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.005 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[47037acb-291a-424f-8fa4-0345e5a46716]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.007 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebba3b7-435f-4f50-8c9c-40e7c244a87b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.018 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae0b99-2117-40f1-9c0c-8a3ea7c733bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.032 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d54ce5-1137-4852-be8d-ea4542d16fb3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.077 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[07ec9873-2787-40b4-9733-91d824e37d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 systemd-udevd[280061]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:25 np0005539551 NetworkManager[48922]: <info>  [1764404845.0938] manager: (tapd9d41f0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.092 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[22a35394-4544-4839-9541-b3fa47df6ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.138 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a74a95ea-153b-428f-a6f7-6dbff3972f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.140 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e808e5c9-cc06-462f-aa95-b8413c03be10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 NetworkManager[48922]: <info>  [1764404845.1610] device (tapd9d41f0a-10): carrier: link connected
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.168 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1499f1-6524-4dbd-8a35-01ae5dc07e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.184 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ee0cc1-a98c-48bb-bd3b-4020ccd70087]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 34702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280093, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.197 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23e18d6e-d439-4673-ad87-86c8208213b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:2887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789204, 'tstamp': 789204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280094, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.217 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4b0108-20e2-4202-bad6-d15e2f090c70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 34702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280095, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.249 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed1ebf6-3b28-459f-8d34-3b422ea6303b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.307 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bb24ee-182b-452c-8a8f-9e974ffb9d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.308 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.308 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.308 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.341 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:25 np0005539551 kernel: tapd9d41f0a-10: entered promiscuous mode
Nov 29 03:27:25 np0005539551 NetworkManager[48922]: <info>  [1764404845.3422] manager: (tapd9d41f0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.344 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.345 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.346 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:25Z|00606|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.372 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.372 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0873adde-09f2-4af3-9c50-3cf9585c924a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.373 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:25.374 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.424 227364 DEBUG nova.compute.manager [req-e01c9604-8bef-4310-ba25-06eecf28c144 req-6b91e531-ec14-48eb-9c58-abda494d348d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.424 227364 DEBUG oslo_concurrency.lockutils [req-e01c9604-8bef-4310-ba25-06eecf28c144 req-6b91e531-ec14-48eb-9c58-abda494d348d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.425 227364 DEBUG oslo_concurrency.lockutils [req-e01c9604-8bef-4310-ba25-06eecf28c144 req-6b91e531-ec14-48eb-9c58-abda494d348d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.425 227364 DEBUG oslo_concurrency.lockutils [req-e01c9604-8bef-4310-ba25-06eecf28c144 req-6b91e531-ec14-48eb-9c58-abda494d348d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.425 227364 DEBUG nova.compute.manager [req-e01c9604-8bef-4310-ba25-06eecf28c144 req-6b91e531-ec14-48eb-9c58-abda494d348d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Processing event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.445 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404845.445536, 480ae817-3676-4499-a047-6b8b383e7bf2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.446 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.448 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.464 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.467 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.468 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.472 227364 INFO nova.virt.libvirt.driver [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Instance spawned successfully.#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.472 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:27:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:25.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.492 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.493 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404845.4456353, 480ae817-3676-4499-a047-6b8b383e7bf2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.493 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.498 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.499 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.499 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.500 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.500 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.501 227364 DEBUG nova.virt.libvirt.driver [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.524 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.527 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404845.4503615, 480ae817-3676-4499-a047-6b8b383e7bf2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.527 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.553 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.557 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.588 227364 INFO nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Took 12.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.588 227364 DEBUG nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.589 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.649 227364 INFO nova.compute.manager [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Took 13.18 seconds to build instance.#033[00m
Nov 29 03:27:25 np0005539551 nova_compute[227360]: 2025-11-29 08:27:25.666 227364 DEBUG oslo_concurrency.lockutils [None req-da913fea-3f26-41cc-b119-cdaf697fe87b 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:25 np0005539551 podman[280169]: 2025-11-29 08:27:25.780600874 +0000 UTC m=+0.055800181 container create 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:27:25 np0005539551 systemd[1]: Started libpod-conmon-6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271.scope.
Nov 29 03:27:25 np0005539551 podman[280169]: 2025-11-29 08:27:25.750665755 +0000 UTC m=+0.025865082 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:25 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:27:25 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4892b8beb5e7852e93dac44a96d93d23d770aeb101354988b52bea8376b88508/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:25 np0005539551 podman[280169]: 2025-11-29 08:27:25.879063887 +0000 UTC m=+0.154263194 container init 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:25 np0005539551 podman[280169]: 2025-11-29 08:27:25.883834697 +0000 UTC m=+0.159034004 container start 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:27:25 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [NOTICE]   (280189) : New worker (280191) forked
Nov 29 03:27:25 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [NOTICE]   (280189) : Loading success.
Nov 29 03:27:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:26 np0005539551 nova_compute[227360]: 2025-11-29 08:27:26.466 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.004000107s ======
Nov 29 03:27:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:27.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.501 227364 DEBUG nova.compute.manager [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.502 227364 DEBUG oslo_concurrency.lockutils [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.503 227364 DEBUG oslo_concurrency.lockutils [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.503 227364 DEBUG oslo_concurrency.lockutils [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.503 227364 DEBUG nova.compute.manager [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.503 227364 WARNING nova.compute.manager [req-5e261bc8-48d7-401b-be7a-53a9e5772db2 req-09d49de0-b006-43c2-8771-d475abaf6f52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received unexpected event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.610 227364 DEBUG nova.compute.manager [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:27 np0005539551 nova_compute[227360]: 2025-11-29 08:27:27.661 227364 INFO nova.compute.manager [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] instance snapshotting#033[00m
Nov 29 03:27:28 np0005539551 nova_compute[227360]: 2025-11-29 08:27:28.020 227364 INFO nova.virt.libvirt.driver [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Beginning live snapshot process#033[00m
Nov 29 03:27:28 np0005539551 nova_compute[227360]: 2025-11-29 08:27:28.181 227364 DEBUG nova.virt.libvirt.imagebackend [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:27:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:28.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539551 nova_compute[227360]: 2025-11-29 08:27:28.359 227364 DEBUG nova.storage.rbd_utils [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] creating snapshot(a640622302f64feba75928c2ca0603b0) on rbd image(480ae817-3676-4499-a047-6b8b383e7bf2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:27:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:28 np0005539551 nova_compute[227360]: 2025-11-29 08:27:28.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 03:27:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:29.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:29 np0005539551 nova_compute[227360]: 2025-11-29 08:27:29.563 227364 DEBUG nova.storage.rbd_utils [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] cloning vms/480ae817-3676-4499-a047-6b8b383e7bf2_disk@a640622302f64feba75928c2ca0603b0 to images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:27:30 np0005539551 nova_compute[227360]: 2025-11-29 08:27:30.099 227364 DEBUG nova.storage.rbd_utils [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] flattening images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:27:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:30.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:30 np0005539551 nova_compute[227360]: 2025-11-29 08:27:30.892 227364 DEBUG nova.storage.rbd_utils [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] removing snapshot(a640622302f64feba75928c2ca0603b0) on rbd image(480ae817-3676-4499-a047-6b8b383e7bf2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:27:31 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 03:27:31 np0005539551 nova_compute[227360]: 2025-11-29 08:27:31.469 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:31.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 03:27:31 np0005539551 nova_compute[227360]: 2025-11-29 08:27:31.695 227364 DEBUG nova.storage.rbd_utils [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] creating snapshot(snap) on rbd image(ff39bd0f-b544-46e3-a2c3-0aed51f8ff44) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:27:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:32.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 03:27:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:33.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:33 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:33Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:a8:a2 10.100.0.9
Nov 29 03:27:33 np0005539551 nova_compute[227360]: 2025-11-29 08:27:33.606 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:34.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:34 np0005539551 nova_compute[227360]: 2025-11-29 08:27:34.318 227364 INFO nova.virt.libvirt.driver [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Snapshot image upload complete#033[00m
Nov 29 03:27:34 np0005539551 nova_compute[227360]: 2025-11-29 08:27:34.318 227364 INFO nova.compute.manager [None req-f76bb079-9b91-4c20-8299-cc2908cb0bcb 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Took 6.66 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:27:35 np0005539551 nova_compute[227360]: 2025-11-29 08:27:35.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:35 np0005539551 nova_compute[227360]: 2025-11-29 08:27:35.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:36 np0005539551 nova_compute[227360]: 2025-11-29 08:27:36.512 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539551 podman[280593]: 2025-11-29 08:27:36.627985411 +0000 UTC m=+0.075931735 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:27:36 np0005539551 podman[280594]: 2025-11-29 08:27:36.643415209 +0000 UTC m=+0.085982868 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:36 np0005539551 podman[280592]: 2025-11-29 08:27:36.677649205 +0000 UTC m=+0.119701170 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:27:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790643118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:38.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:27:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.609 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.675 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.676 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:38 np0005539551 nova_compute[227360]: 2025-11-29 08:27:38.676 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:27:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:40Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:58:94 10.100.0.13
Nov 29 03:27:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:27:40Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:58:94 10.100.0.13
Nov 29 03:27:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:40.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.514 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.684 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.708 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.708 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.709 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.709 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.709 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.749 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.749 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.749 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.750 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:27:41 np0005539551 nova_compute[227360]: 2025-11-29 08:27:41.750 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/52338917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.189 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 03:27:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:42.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.276 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.276 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.279 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.279 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.281 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.281 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.459 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.460 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3690MB free_disk=20.61745834350586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.460 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.461 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.615 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance e5dc7e85-787d-4ed8-9752-a604a1815f2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.616 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6076e253-7727-49f1-9aa6-6c4ccc52fc56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.616 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 480ae817-3676-4499-a047-6b8b383e7bf2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.616 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.616 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:27:42 np0005539551 nova_compute[227360]: 2025-11-29 08:27:42.817 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3702930249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.293 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.300 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:27:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:43.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.553 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.668 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.669 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:43.702 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:27:43 np0005539551 nova_compute[227360]: 2025-11-29 08:27:43.703 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:43 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:43.704 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:27:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:44 np0005539551 nova_compute[227360]: 2025-11-29 08:27:44.370 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:27:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 03:27:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:45.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:46 np0005539551 nova_compute[227360]: 2025-11-29 08:27:46.516 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:27:46.706 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:47.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539551 nova_compute[227360]: 2025-11-29 08:27:48.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:27:48 np0005539551 nova_compute[227360]: 2025-11-29 08:27:48.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:27:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:48 np0005539551 nova_compute[227360]: 2025-11-29 08:27:48.614 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:49.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:50.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:51.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:51 np0005539551 nova_compute[227360]: 2025-11-29 08:27:51.533 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 03:27:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:52.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:53.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:53 np0005539551 nova_compute[227360]: 2025-11-29 08:27:53.615 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:54 np0005539551 nova_compute[227360]: 2025-11-29 08:27:54.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:27:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:55.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:56.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:56 np0005539551 nova_compute[227360]: 2025-11-29 08:27:56.535 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:57.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:58.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:58 np0005539551 nova_compute[227360]: 2025-11-29 08:27:58.618 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:27:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:59.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.476895) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880476986, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2495, "num_deletes": 255, "total_data_size": 5674041, "memory_usage": 5753728, "flush_reason": "Manual Compaction"}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880503445, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3687227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51386, "largest_seqno": 53876, "table_properties": {"data_size": 3676999, "index_size": 6466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22074, "raw_average_key_size": 20, "raw_value_size": 3656249, "raw_average_value_size": 3475, "num_data_blocks": 279, "num_entries": 1052, "num_filter_entries": 1052, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404695, "oldest_key_time": 1764404695, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 26594 microseconds, and 10555 cpu microseconds.
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.503502) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3687227 bytes OK
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.503525) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.505158) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.505179) EVENT_LOG_v1 {"time_micros": 1764404880505173, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.505197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5662897, prev total WAL file size 5662897, number of live WAL files 2.
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.506680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3600KB)], [102(10MB)]
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880506713, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 15010053, "oldest_snapshot_seqno": -1}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8705 keys, 13127377 bytes, temperature: kUnknown
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880597708, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 13127377, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13068776, "index_size": 35705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 225656, "raw_average_key_size": 25, "raw_value_size": 12913166, "raw_average_value_size": 1483, "num_data_blocks": 1397, "num_entries": 8705, "num_filter_entries": 8705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.598005) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13127377 bytes
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.600513) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.8 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 9233, records dropped: 528 output_compression: NoCompression
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.600534) EVENT_LOG_v1 {"time_micros": 1764404880600524, "job": 64, "event": "compaction_finished", "compaction_time_micros": 91105, "compaction_time_cpu_micros": 27221, "output_level": 6, "num_output_files": 1, "total_output_size": 13127377, "num_input_records": 9233, "num_output_records": 8705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880601524, "job": 64, "event": "table_file_deletion", "file_number": 104}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880603620, "job": 64, "event": "table_file_deletion", "file_number": 102}
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.506627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.603794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.603801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.603804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.603806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:28:00.603808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:01 np0005539551 nova_compute[227360]: 2025-11-29 08:28:01.619 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:28:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:02.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.165 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.166 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.166 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.167 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.167 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.168 227364 INFO nova.compute.manager [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Terminating instance
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.170 227364 DEBUG nova.compute.manager [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:28:03 np0005539551 kernel: tap31cb9e6c-fe (unregistering): left promiscuous mode
Nov 29 03:28:03 np0005539551 NetworkManager[48922]: <info>  [1764404883.2352] device (tap31cb9e6c-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:03Z|00607|binding|INFO|Releasing lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 from this chassis (sb_readonly=0)
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:28:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:03Z|00608|binding|INFO|Setting lport 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 down in Southbound
Nov 29 03:28:03 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:03Z|00609|binding|INFO|Removing iface tap31cb9e6c-fe ovn-installed in OVS
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.252 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:bc:22 10.100.0.14'], port_security=['fa:16:3e:df:bc:22 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5dc7e85-787d-4ed8-9752-a604a1815f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.255 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.257 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.259 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cb653604-e792-4981-96ea-d924b2e9ad24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.260 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:28:03 np0005539551 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 29 03:28:03 np0005539551 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000087.scope: Consumed 18.342s CPU time.
Nov 29 03:28:03 np0005539551 systemd-machined[190756]: Machine qemu-62-instance-00000087 terminated.
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.411 227364 INFO nova.virt.libvirt.driver [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Instance destroyed successfully.#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.411 227364 DEBUG nova.objects.instance [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid e5dc7e85-787d-4ed8-9752-a604a1815f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:03 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [NOTICE]   (278190) : haproxy version is 2.8.14-c23fe91
Nov 29 03:28:03 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [NOTICE]   (278190) : path to executable is /usr/sbin/haproxy
Nov 29 03:28:03 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [WARNING]  (278190) : Exiting Master process...
Nov 29 03:28:03 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [ALERT]    (278190) : Current worker (278192) exited with code 143 (Terminated)
Nov 29 03:28:03 np0005539551 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[278186]: [WARNING]  (278190) : All workers exited. Exiting... (0)
Nov 29 03:28:03 np0005539551 systemd[1]: libpod-78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8.scope: Deactivated successfully.
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.429 227364 DEBUG nova.virt.libvirt.vif [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1341744770',display_name='tempest-ServerStableDeviceRescueTest-server-1341744770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1341744770',id=135,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-wgna8oed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:06Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=e5dc7e85-787d-4ed8-9752-a604a1815f2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.430 227364 DEBUG nova.network.os_vif_util [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "address": "fa:16:3e:df:bc:22", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31cb9e6c-fe", "ovs_interfaceid": "31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.431 227364 DEBUG nova.network.os_vif_util [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.431 227364 DEBUG os_vif [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.433 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.433 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31cb9e6c-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.435 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539551 podman[280776]: 2025-11-29 08:28:03.438413634 +0000 UTC m=+0.063735125 container died 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.439 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.442 227364 INFO os_vif [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:bc:22,bridge_name='br-int',has_traffic_filtering=True,id=31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31cb9e6c-fe')#033[00m
Nov 29 03:28:03 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8-userdata-shm.mount: Deactivated successfully.
Nov 29 03:28:03 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d6aaff5931df7f2502af676e623f8318a79f27291e690a7c8178eaab8a4459d0-merged.mount: Deactivated successfully.
Nov 29 03:28:03 np0005539551 podman[280776]: 2025-11-29 08:28:03.480514473 +0000 UTC m=+0.105835924 container cleanup 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:03 np0005539551 systemd[1]: libpod-conmon-78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8.scope: Deactivated successfully.
Nov 29 03:28:03 np0005539551 podman[280831]: 2025-11-29 08:28:03.54106521 +0000 UTC m=+0.040957508 container remove 78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.548 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a84e7e0-f846-4b39-8acd-b55b732d1818]: (4, ('Sat Nov 29 08:28:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8)\n78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8\nSat Nov 29 08:28:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8)\n78807798cd291cdca1afc2bbbfc7bee952cc0daa23153d2c0cb9ab6821e2fce8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.551 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[501c1fc1-6e01-477a-ba94-77f7f06bbea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.552 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:03 np0005539551 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.554 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.573 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.576 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6aa759-10e7-410d-8252-8c92829ca53b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.582 227364 DEBUG nova.compute.manager [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.583 227364 DEBUG oslo_concurrency.lockutils [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.584 227364 DEBUG oslo_concurrency.lockutils [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.584 227364 DEBUG oslo_concurrency.lockutils [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.584 227364 DEBUG nova.compute.manager [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.585 227364 DEBUG nova.compute.manager [req-32c9c8d9-8675-4bec-aacc-acb89977bac0 req-3aff12f4-2cc0-4ed7-b479-0992d3b361b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-unplugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.599 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[32d14683-5ada-4792-a1b7-cc76acf6752a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.600 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba59df8b-f30e-474b-8f40-4c851ca697af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.622 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6efe9d0-b140-44e4-9c6e-588879f68d8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781214, 'reachable_time': 22089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280846, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.626 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:28:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:03.626 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[09b7c645-70b5-4d42-8550-7ce7361954a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539551 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:28:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.960 227364 INFO nova.virt.libvirt.driver [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Deleting instance files /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b_del#033[00m
Nov 29 03:28:03 np0005539551 nova_compute[227360]: 2025-11-29 08:28:03.960 227364 INFO nova.virt.libvirt.driver [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Deletion of /var/lib/nova/instances/e5dc7e85-787d-4ed8-9752-a604a1815f2b_del complete#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.028 227364 INFO nova.compute.manager [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.029 227364 DEBUG oslo.service.loopingcall [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.030 227364 DEBUG nova.compute.manager [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.030 227364 DEBUG nova.network.neutron [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:28:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.629 227364 DEBUG nova.network.neutron [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.648 227364 INFO nova.compute.manager [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Took 0.62 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.686 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.687 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:04 np0005539551 nova_compute[227360]: 2025-11-29 08:28:04.788 227364 DEBUG oslo_concurrency.processutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2905766190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.248 227364 DEBUG oslo_concurrency.processutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.254 227364 DEBUG nova.compute.provider_tree [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.268 227364 DEBUG nova.scheduler.client.report [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.291 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.315 227364 INFO nova.scheduler.client.report [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Deleted allocations for instance e5dc7e85-787d-4ed8-9752-a604a1815f2b#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.381 227364 DEBUG oslo_concurrency.lockutils [None req-4dbfd608-25db-4b29-b1c1-837648ea5544 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.665 227364 DEBUG nova.compute.manager [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.666 227364 DEBUG oslo_concurrency.lockutils [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.666 227364 DEBUG oslo_concurrency.lockutils [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.666 227364 DEBUG oslo_concurrency.lockutils [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5dc7e85-787d-4ed8-9752-a604a1815f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.666 227364 DEBUG nova.compute.manager [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] No waiting events found dispatching network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.667 227364 WARNING nova.compute.manager [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received unexpected event network-vif-plugged-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:28:05 np0005539551 nova_compute[227360]: 2025-11-29 08:28:05.667 227364 DEBUG nova.compute.manager [req-d4a8112b-fe3f-47d5-9b45-f3ecd1a4265f req-cc04232b-2a4b-4290-91dc-e99e13d486e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Received event network-vif-deleted-31cb9e6c-feed-4b4b-bc7d-c4f8c7e5b441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539551 nova_compute[227360]: 2025-11-29 08:28:06.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 03:28:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:07 np0005539551 podman[280872]: 2025-11-29 08:28:07.63124486 +0000 UTC m=+0.075750241 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:28:07 np0005539551 podman[280871]: 2025-11-29 08:28:07.648551628 +0000 UTC m=+0.088230898 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:28:07 np0005539551 podman[280870]: 2025-11-29 08:28:07.670015248 +0000 UTC m=+0.112108364 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:08.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:08 np0005539551 nova_compute[227360]: 2025-11-29 08:28:08.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:09.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:11 np0005539551 nova_compute[227360]: 2025-11-29 08:28:11.655 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:11Z|00610|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:11Z|00611|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539551 nova_compute[227360]: 2025-11-29 08:28:11.801 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:13 np0005539551 nova_compute[227360]: 2025-11-29 08:28:13.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:13.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:16 np0005539551 nova_compute[227360]: 2025-11-29 08:28:16.657 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:18.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:18 np0005539551 nova_compute[227360]: 2025-11-29 08:28:18.409 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404883.4074824, e5dc7e85-787d-4ed8-9752-a604a1815f2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:18 np0005539551 nova_compute[227360]: 2025-11-29 08:28:18.410 227364 INFO nova.compute.manager [-] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:18 np0005539551 nova_compute[227360]: 2025-11-29 08:28:18.430 227364 DEBUG nova.compute.manager [None req-52f3d692-dec8-40fa-81ae-170957edab94 - - - - - -] [instance: e5dc7e85-787d-4ed8-9752-a604a1815f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:18 np0005539551 nova_compute[227360]: 2025-11-29 08:28:18.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 03:28:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:19.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:19.877 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:19.878 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:19.879 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:20.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:21 np0005539551 nova_compute[227360]: 2025-11-29 08:28:21.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:21 np0005539551 nova_compute[227360]: 2025-11-29 08:28:21.659 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:22.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:23 np0005539551 nova_compute[227360]: 2025-11-29 08:28:23.479 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:23.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:24.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:25.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:26.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:26Z|00612|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:26Z|00613|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:28:26 np0005539551 nova_compute[227360]: 2025-11-29 08:28:26.522 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539551 nova_compute[227360]: 2025-11-29 08:28:26.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539551 nova_compute[227360]: 2025-11-29 08:28:26.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 03:28:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:28 np0005539551 nova_compute[227360]: 2025-11-29 08:28:28.482 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:30.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.674 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.675 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.699 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.796 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.797 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.803 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.804 227364 INFO nova.compute.claims [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:28:30 np0005539551 nova_compute[227360]: 2025-11-29 08:28:30.953 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2015759419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.425 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.432 227364 DEBUG nova.compute.provider_tree [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.450 227364 DEBUG nova.scheduler.client.report [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.471 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.471 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.524 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.525 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.551 227364 INFO nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.568 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.653 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.654 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.655 227364 INFO nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Creating image(s)#033[00m
Nov 29 03:28:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.680 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.706 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.738 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.742 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.774 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.830 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.831 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.832 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.832 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.861 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.864 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:31 np0005539551 nova_compute[227360]: 2025-11-29 08:28:31.928 227364 DEBUG nova.policy [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.131 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.186 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.285 227364 DEBUG nova.objects.instance [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.298 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.299 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Ensure instance console log exists: /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.299 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.300 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:32 np0005539551 nova_compute[227360]: 2025-11-29 08:28:32.300 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:32.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:33 np0005539551 nova_compute[227360]: 2025-11-29 08:28:33.340 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Successfully created port: f455cc42-f497-49e9-84f6-0713ec25f786 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:28:33 np0005539551 nova_compute[227360]: 2025-11-29 08:28:33.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:33.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:34.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.262 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Successfully updated port: f455cc42-f497-49e9-84f6-0713ec25f786 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.276 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.276 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.276 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.467 227364 DEBUG nova.compute.manager [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.467 227364 DEBUG nova.compute.manager [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing instance network info cache due to event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.467 227364 DEBUG oslo_concurrency.lockutils [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:35 np0005539551 nova_compute[227360]: 2025-11-29 08:28:35.543 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:36.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:36 np0005539551 nova_compute[227360]: 2025-11-29 08:28:36.681 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:37.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.721 227364 DEBUG nova.network.neutron [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.747 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.748 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance network_info: |[{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.748 227364 DEBUG oslo_concurrency.lockutils [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.748 227364 DEBUG nova.network.neutron [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.751 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start _get_guest_xml network_info=[{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.755 227364 WARNING nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.759 227364 DEBUG nova.virt.libvirt.host [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.760 227364 DEBUG nova.virt.libvirt.host [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.763 227364 DEBUG nova.virt.libvirt.host [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.764 227364 DEBUG nova.virt.libvirt.host [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.765 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.765 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.766 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.766 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.766 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.766 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.766 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.767 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.767 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.767 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.767 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.767 227364 DEBUG nova.virt.hardware [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:37 np0005539551 nova_compute[227360]: 2025-11-29 08:28:37.770 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:38 np0005539551 podman[281164]: 2025-11-29 08:28:38.187630928 +0000 UTC m=+0.057526107 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Nov 29 03:28:38 np0005539551 podman[281165]: 2025-11-29 08:28:38.196547689 +0000 UTC m=+0.061202307 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/932946566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:38 np0005539551 podman[281163]: 2025-11-29 08:28:38.214853965 +0000 UTC m=+0.088581118 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.240 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.269 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.274 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:38.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.487 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4009393923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.723 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.725 227364 DEBUG nova.virt.libvirt.vif [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:31Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.726 227364 DEBUG nova.network.os_vif_util [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.727 227364 DEBUG nova.network.os_vif_util [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.728 227364 DEBUG nova.objects.instance [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.777 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <uuid>d4258b43-e73e-47f3-b1d1-f169bcaf4534</uuid>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <name>instance-00000092</name>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1524442594</nova:name>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:28:37</nova:creationTime>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <nova:port uuid="f455cc42-f497-49e9-84f6-0713ec25f786">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="serial">d4258b43-e73e-47f3-b1d1-f169bcaf4534</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="uuid">d4258b43-e73e-47f3-b1d1-f169bcaf4534</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a5:a7:0b"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <target dev="tapf455cc42-f4"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/console.log" append="off"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:28:38 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:28:38 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:28:38 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:28:38 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.778 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Preparing to wait for external event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.779 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.779 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.779 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.780 227364 DEBUG nova.virt.libvirt.vif [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:31Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.780 227364 DEBUG nova.network.os_vif_util [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.781 227364 DEBUG nova.network.os_vif_util [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.781 227364 DEBUG os_vif [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.781 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.782 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.782 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.786 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf455cc42-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.786 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf455cc42-f4, col_values=(('external_ids', {'iface-id': 'f455cc42-f497-49e9-84f6-0713ec25f786', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:a7:0b', 'vm-uuid': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.788 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539551 NetworkManager[48922]: <info>  [1764404918.7893] manager: (tapf455cc42-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.795 227364 INFO os_vif [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4')#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.861 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.861 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.861 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:a5:a7:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.862 227364 INFO nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Using config drive#033[00m
Nov 29 03:28:38 np0005539551 nova_compute[227360]: 2025-11-29 08:28:38.888 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.355 227364 INFO nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Creating config drive at /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.360 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpho403d1z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.428 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.606 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.607 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.607 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.607 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.702 227364 DEBUG nova.network.neutron [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updated VIF entry in instance network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.702 227364 DEBUG nova.network.neutron [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:39 np0005539551 nova_compute[227360]: 2025-11-29 08:28:39.761 227364 DEBUG oslo_concurrency.lockutils [req-2bcc168c-ae27-4cf1-a4ea-b6bc53911fa4 req-a0fba2b7-2744-448d-8f96-546082df9278 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.402 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpho403d1z" returned: 0 in 1.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.447 227364 DEBUG nova.storage.rbd_utils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.452 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.644 227364 DEBUG oslo_concurrency.processutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.646 227364 INFO nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Deleting local config drive /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/disk.config because it was imported into RBD.#033[00m
Nov 29 03:28:40 np0005539551 kernel: tapf455cc42-f4: entered promiscuous mode
Nov 29 03:28:40 np0005539551 NetworkManager[48922]: <info>  [1764404920.7230] manager: (tapf455cc42-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 03:28:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:40Z|00614|binding|INFO|Claiming lport f455cc42-f497-49e9-84f6-0713ec25f786 for this chassis.
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:40Z|00615|binding|INFO|f455cc42-f497-49e9-84f6-0713ec25f786: Claiming fa:16:3e:a5:a7:0b 10.100.0.6
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.732 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.733 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 bound to our chassis#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.735 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.748 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1853f1da-545a-45a5-9ab1-6b4226be1347]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.749 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8094e12d-21 in ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.754 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8094e12d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.754 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[83f6dce9-485e-4c8c-ad67-115291ac4fa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.755 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[00697b4f-41c4-4b7e-9bc1-4aa140573eda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:40Z|00616|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 ovn-installed in OVS
Nov 29 03:28:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:40Z|00617|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 up in Southbound
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.767 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.770 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[793990b4-5dfa-462a-be4e-5379f8e10410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 systemd-machined[190756]: New machine qemu-67-instance-00000092.
Nov 29 03:28:40 np0005539551 nova_compute[227360]: 2025-11-29 08:28:40.775 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:40 np0005539551 systemd[1]: Started Virtual Machine qemu-67-instance-00000092.
Nov 29 03:28:40 np0005539551 systemd-udevd[281450]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.800 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba812756-9057-4738-8352-cf2d5253ad5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 NetworkManager[48922]: <info>  [1764404920.8142] device (tapf455cc42-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:40 np0005539551 NetworkManager[48922]: <info>  [1764404920.8149] device (tapf455cc42-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.833 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6aedbb21-76dc-4688-b854-e4b3b0b40131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 NetworkManager[48922]: <info>  [1764404920.8402] manager: (tap8094e12d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 03:28:40 np0005539551 systemd-udevd[281453]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.840 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[753c7225-95e5-46f0-8822-14a8801613b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.874 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[066bf0fa-0c7e-4cd6-803b-a0d137b39e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.877 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5840a4-2bd7-412d-aaad-69fba690ee8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 NetworkManager[48922]: <info>  [1764404920.9091] device (tap8094e12d-20): carrier: link connected
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.914 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd9bb2b-b0b9-4a59-9574-6e7fbd159242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.944 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1943a0fc-a7ac-43b7-8f47-370a41f6f0c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8094e12d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796778, 'reachable_time': 23239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281480, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.963 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ad91a578-673a-4a9e-a935-21320d2782ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:7afe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796778, 'tstamp': 796778}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281481, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:40.984 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89779e77-6786-4748-8641-6e4a722242c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8094e12d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796778, 'reachable_time': 23239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281482, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.009 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48c6a36c-edf4-4480-9316-4b34731f19c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.067 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ed111e15-65d5-4b4b-a24f-cd7935c8ea48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.068 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8094e12d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.068 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.068 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8094e12d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:41 np0005539551 NetworkManager[48922]: <info>  [1764404921.0709] manager: (tap8094e12d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 03:28:41 np0005539551 kernel: tap8094e12d-20: entered promiscuous mode
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.074 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8094e12d-20, col_values=(('external_ids', {'iface-id': '5843a205-5cee-4ff2-89f1-ae1b369cf659'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:41Z|00618|binding|INFO|Releasing lport 5843a205-5cee-4ff2-89f1-ae1b369cf659 from this chassis (sb_readonly=0)
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.092 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [{"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.105 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.106 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a19d7ec9-2f6e-460b-83d6-134f5fc21ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.107 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-8094e12d-22b9-4e7c-bcb5-2de20ab6e675
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 8094e12d-22b9-4e7c-bcb5-2de20ab6e675
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:28:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:41.108 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'env', 'PROCESS_TAG=haproxy-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.109 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-6076e253-7727-49f1-9aa6-6c4ccc52fc56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.110 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.110 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.111 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.111 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.133 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.133 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.133 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.134 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.134 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.396 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404921.395517, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.397 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Started (Lifecycle Event)#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.414 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.419 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404921.3968766, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.419 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.433 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.435 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.458 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:28:41 np0005539551 podman[281576]: 2025-11-29 08:28:41.476782567 +0000 UTC m=+0.050035045 container create b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:28:41 np0005539551 systemd[1]: Started libpod-conmon-b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488.scope.
Nov 29 03:28:41 np0005539551 podman[281576]: 2025-11-29 08:28:41.449268343 +0000 UTC m=+0.022520831 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:28:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100446974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:41 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:28:41 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c3f598d1f3b5bda2a99bcb0a51e43e009de6acd50b2e9252be9397b3eb6762/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:41 np0005539551 podman[281576]: 2025-11-29 08:28:41.580418231 +0000 UTC m=+0.153670749 container init b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:41 np0005539551 podman[281576]: 2025-11-29 08:28:41.586518096 +0000 UTC m=+0.159770584 container start b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.588 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:41 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [NOTICE]   (281597) : New worker (281599) forked
Nov 29 03:28:41 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [NOTICE]   (281597) : Loading success.
Nov 29 03:28:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:41.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.684 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.692 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.692 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.696 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.696 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.700 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.700 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.881 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.883 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3834MB free_disk=20.764175415039062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.883 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.883 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.961 227364 DEBUG nova.compute.manager [req-84f51846-f52a-4af0-bdc2-86c52d6b23ad req-11ef00e9-e52c-40db-b6f1-d8c1ad289801 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.961 227364 DEBUG oslo_concurrency.lockutils [req-84f51846-f52a-4af0-bdc2-86c52d6b23ad req-11ef00e9-e52c-40db-b6f1-d8c1ad289801 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.962 227364 DEBUG oslo_concurrency.lockutils [req-84f51846-f52a-4af0-bdc2-86c52d6b23ad req-11ef00e9-e52c-40db-b6f1-d8c1ad289801 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.962 227364 DEBUG oslo_concurrency.lockutils [req-84f51846-f52a-4af0-bdc2-86c52d6b23ad req-11ef00e9-e52c-40db-b6f1-d8c1ad289801 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.963 227364 DEBUG nova.compute.manager [req-84f51846-f52a-4af0-bdc2-86c52d6b23ad req-11ef00e9-e52c-40db-b6f1-d8c1ad289801 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Processing event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.964 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.968 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404921.9683456, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.968 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.970 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.972 227364 INFO nova.virt.libvirt.driver [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance spawned successfully.#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.972 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.986 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6076e253-7727-49f1-9aa6-6c4ccc52fc56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.986 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 480ae817-3676-4499-a047-6b8b383e7bf2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.986 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance d4258b43-e73e-47f3-b1d1-f169bcaf4534 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.987 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.987 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:28:41 np0005539551 nova_compute[227360]: 2025-11-29 08:28:41.997 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.000 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.006 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.007 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.007 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.007 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.008 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.008 227364 DEBUG nova.virt.libvirt.driver [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.048 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.126 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.128 227364 INFO nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Took 10.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.129 227364 DEBUG nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.131 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.245 227364 INFO nova.compute.manager [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Took 11.49 seconds to build instance.#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.261 227364 DEBUG oslo_concurrency.lockutils [None req-834839b4-d98c-447a-af5d-77a13a7150c7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:42.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2158833612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.653 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.659 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.683 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.731 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:28:42 np0005539551 nova_compute[227360]: 2025-11-29 08:28:42.732 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:43 np0005539551 nova_compute[227360]: 2025-11-29 08:28:43.789 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.058 227364 DEBUG nova.compute.manager [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.059 227364 DEBUG oslo_concurrency.lockutils [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.059 227364 DEBUG oslo_concurrency.lockutils [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.059 227364 DEBUG oslo_concurrency.lockutils [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.060 227364 DEBUG nova.compute.manager [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.060 227364 WARNING nova.compute.manager [req-ba707e5d-df60-491d-bd41-070284731ca4 req-e1a28cae-aeda-4446-8c99-21bc6d5b5f8d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:44.272 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:44.273 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:44 np0005539551 nova_compute[227360]: 2025-11-29 08:28:44.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:44.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:45 np0005539551 nova_compute[227360]: 2025-11-29 08:28:45.030 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:45 np0005539551 nova_compute[227360]: 2025-11-29 08:28:45.862 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.145 227364 DEBUG nova.compute.manager [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.145 227364 DEBUG nova.compute.manager [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing instance network info cache due to event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.146 227364 DEBUG oslo_concurrency.lockutils [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.146 227364 DEBUG oslo_concurrency.lockutils [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.146 227364 DEBUG nova.network.neutron [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:46.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:46 np0005539551 nova_compute[227360]: 2025-11-29 08:28:46.687 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:48.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:48 np0005539551 nova_compute[227360]: 2025-11-29 08:28:48.792 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:49 np0005539551 nova_compute[227360]: 2025-11-29 08:28:49.779 227364 DEBUG nova.network.neutron [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updated VIF entry in instance network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:49 np0005539551 nova_compute[227360]: 2025-11-29 08:28:49.779 227364 DEBUG nova.network.neutron [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:49 np0005539551 nova_compute[227360]: 2025-11-29 08:28:49.868 227364 DEBUG oslo_concurrency.lockutils [req-eee5941d-2b37-4b1d-8d03-0305a3871bea req-f9093639-5d2b-4bb6-a93d-ca3a749c130e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:50 np0005539551 nova_compute[227360]: 2025-11-29 08:28:50.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:50 np0005539551 nova_compute[227360]: 2025-11-29 08:28:50.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:28:51 np0005539551 nova_compute[227360]: 2025-11-29 08:28:51.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:51.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:28:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181055010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:28:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:28:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181055010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:28:52 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:28:52.276 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:52.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:53 np0005539551 nova_compute[227360]: 2025-11-29 08:28:53.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:28:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973261306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:28:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:28:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973261306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:28:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:54.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:54 np0005539551 nova_compute[227360]: 2025-11-29 08:28:54.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:55.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:55Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:a7:0b 10.100.0.6
Nov 29 03:28:55 np0005539551 ovn_controller[130266]: 2025-11-29T08:28:55Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:a7:0b 10.100.0.6
Nov 29 03:28:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:56 np0005539551 nova_compute[227360]: 2025-11-29 08:28:56.736 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:57.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539551 nova_compute[227360]: 2025-11-29 08:28:58.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:28:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:00.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.801 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.801 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.801 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.803 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.804 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.805 227364 INFO nova.compute.manager [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Terminating instance#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.806 227364 DEBUG nova.compute.manager [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:00 np0005539551 kernel: tap88452b5d-37 (unregistering): left promiscuous mode
Nov 29 03:29:00 np0005539551 NetworkManager[48922]: <info>  [1764404940.8543] device (tap88452b5d-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:00Z|00619|binding|INFO|Releasing lport 88452b5d-373a-407c-a8ab-aba7b53de034 from this chassis (sb_readonly=0)
Nov 29 03:29:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:00Z|00620|binding|INFO|Setting lport 88452b5d-373a-407c-a8ab-aba7b53de034 down in Southbound
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.862 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:00Z|00621|binding|INFO|Removing iface tap88452b5d-37 ovn-installed in OVS
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.865 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:00.869 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:a8:a2 10.100.0.9'], port_security=['fa:16:3e:5d:a8:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6076e253-7727-49f1-9aa6-6c4ccc52fc56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88452b5d-373a-407c-a8ab-aba7b53de034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:00.870 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88452b5d-373a-407c-a8ab-aba7b53de034 in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:29:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:00.872 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:00.873 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74ab71b7-8729-4141-b2e8-636ff2ef67b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:00.873 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace which is not needed anymore#033[00m
Nov 29 03:29:00 np0005539551 nova_compute[227360]: 2025-11-29 08:29:00.878 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:00 np0005539551 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 29 03:29:00 np0005539551 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008b.scope: Consumed 17.568s CPU time.
Nov 29 03:29:00 np0005539551 systemd-machined[190756]: Machine qemu-65-instance-0000008b terminated.
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [NOTICE]   (279910) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [NOTICE]   (279910) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [WARNING]  (279910) : Exiting Master process...
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [WARNING]  (279910) : Exiting Master process...
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [ALERT]    (279910) : Current worker (279912) exited with code 143 (Terminated)
Nov 29 03:29:00 np0005539551 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[279906]: [WARNING]  (279910) : All workers exited. Exiting... (0)
Nov 29 03:29:00 np0005539551 systemd[1]: libpod-7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471.scope: Deactivated successfully.
Nov 29 03:29:00 np0005539551 podman[281705]: 2025-11-29 08:29:00.992702089 +0000 UTC m=+0.039456629 container died 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:01 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:01 np0005539551 systemd[1]: var-lib-containers-storage-overlay-7ca2c80c775578c4094791c04ba70e467613daf2d1f2a15bd7376f9b1b098163-merged.mount: Deactivated successfully.
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.026 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 podman[281705]: 2025-11-29 08:29:01.03638384 +0000 UTC m=+0.083138390 container cleanup 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.038 227364 INFO nova.virt.libvirt.driver [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Instance destroyed successfully.#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.039 227364 DEBUG nova.objects.instance [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid 6076e253-7727-49f1-9aa6-6c4ccc52fc56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:01 np0005539551 systemd[1]: libpod-conmon-7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471.scope: Deactivated successfully.
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.079 227364 DEBUG nova.virt.libvirt.vif [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1522385452',display_name='tempest-ServerRescueNegativeTestJSON-server-1522385452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1522385452',id=139,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-c2vfh2rm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:20Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=6076e253-7727-49f1-9aa6-6c4ccc52fc56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.080 227364 DEBUG nova.network.os_vif_util [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "88452b5d-373a-407c-a8ab-aba7b53de034", "address": "fa:16:3e:5d:a8:a2", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88452b5d-37", "ovs_interfaceid": "88452b5d-373a-407c-a8ab-aba7b53de034", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.081 227364 DEBUG nova.network.os_vif_util [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.081 227364 DEBUG os_vif [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.083 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88452b5d-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.088 227364 INFO os_vif [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:a8:a2,bridge_name='br-int',has_traffic_filtering=True,id=88452b5d-373a-407c-a8ab-aba7b53de034,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88452b5d-37')#033[00m
Nov 29 03:29:01 np0005539551 podman[281743]: 2025-11-29 08:29:01.091191333 +0000 UTC m=+0.035543423 container remove 7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.096 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f64e2-5dcb-40c5-a243-70f7b854d30b]: (4, ('Sat Nov 29 08:29:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471)\n7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471\nSat Nov 29 08:29:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471)\n7645a84b7c487c9ba2edc7d9004b9e73896b894a85c4c9f102f697e08739c471\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.097 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b12a72c0-1d4b-4923-8acc-63118448d949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.098 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:01 np0005539551 kernel: tap4ca67fce-60: left promiscuous mode
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.116 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c91b00a7-18f5-48fd-8e33-d500afdb2b04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.134 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[84c436c9-cf9c-4c9c-8123-63538b48c040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.136 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4002cdb7-97d2-4f97-8b43-c57f802d627b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.150 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dc1617-004f-41f5-a236-843e04e705bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788517, 'reachable_time': 25031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281776, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.153 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:01.153 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b0680fe6-1204-40e3-93f3-0a376866a010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:01 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4ca67fce\x2d6116\x2d4a0b\x2db0a9\x2dc25b5adaad19.mount: Deactivated successfully.
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.615 227364 INFO nova.compute.manager [None req-d4455e9f-6ab4-4893-8522-a1437d0e4f7d fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Get console output#033[00m
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.620 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:29:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:01 np0005539551 nova_compute[227360]: 2025-11-29 08:29:01.740 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:02.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.438 227364 INFO nova.virt.libvirt.driver [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Deleting instance files /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56_del#033[00m
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.439 227364 INFO nova.virt.libvirt.driver [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Deletion of /var/lib/nova/instances/6076e253-7727-49f1-9aa6-6c4ccc52fc56_del complete#033[00m
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.494 227364 INFO nova.compute.manager [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Took 1.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.495 227364 DEBUG oslo.service.loopingcall [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.496 227364 DEBUG nova.compute.manager [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:29:02 np0005539551 nova_compute[227360]: 2025-11-29 08:29:02.496 227364 DEBUG nova.network.neutron [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:29:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.094 227364 DEBUG nova.network.neutron [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.285 227364 INFO nova.compute.manager [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Took 1.79 seconds to deallocate network for instance.#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.350 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.350 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:04.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.433 227364 DEBUG nova.compute.manager [req-8aabfdd7-baf4-4e78-a3cd-378795118ecc req-463ddeaa-df7d-4b8a-82f8-54aa2ea3f3f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-deleted-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.478 227364 DEBUG oslo_concurrency.processutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/26981714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.946 227364 DEBUG oslo_concurrency.processutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.953 227364 DEBUG nova.compute.provider_tree [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.972 227364 DEBUG nova.scheduler.client.report [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:04 np0005539551 nova_compute[227360]: 2025-11-29 08:29:04.991 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.038 227364 INFO nova.scheduler.client.report [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Deleted allocations for instance 6076e253-7727-49f1-9aa6-6c4ccc52fc56#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.182 227364 DEBUG oslo_concurrency.lockutils [None req-74b8d890-d4c8-447d-aceb-1c685a9b42c2 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.189 227364 DEBUG nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.192 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.192 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.193 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.193 227364 DEBUG nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.193 227364 WARNING nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-unplugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.194 227364 DEBUG nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.194 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.194 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.194 227364 DEBUG oslo_concurrency.lockutils [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6076e253-7727-49f1-9aa6-6c4ccc52fc56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.195 227364 DEBUG nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] No waiting events found dispatching network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:05 np0005539551 nova_compute[227360]: 2025-11-29 08:29:05.195 227364 WARNING nova.compute.manager [req-dc551516-e3fd-4b08-aea3-d55d5275a499 req-fcab0be0-3525-4f0f-b7d9-8bc6c200cb7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Received unexpected event network-vif-plugged-88452b5d-373a-407c-a8ab-aba7b53de034 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:29:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:06 np0005539551 nova_compute[227360]: 2025-11-29 08:29:06.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:06.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:06 np0005539551 nova_compute[227360]: 2025-11-29 08:29:06.741 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:06 np0005539551 nova_compute[227360]: 2025-11-29 08:29:06.742 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:06 np0005539551 nova_compute[227360]: 2025-11-29 08:29:06.742 227364 DEBUG nova.network.neutron [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:06 np0005539551 nova_compute[227360]: 2025-11-29 08:29:06.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:08 np0005539551 nova_compute[227360]: 2025-11-29 08:29:08.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:08 np0005539551 podman[281802]: 2025-11-29 08:29:08.62356767 +0000 UTC m=+0.051470074 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:29:08 np0005539551 podman[281800]: 2025-11-29 08:29:08.64834491 +0000 UTC m=+0.092931736 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:29:08 np0005539551 podman[281801]: 2025-11-29 08:29:08.653070597 +0000 UTC m=+0.091272419 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:29:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:09.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:10.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.048 227364 DEBUG nova.network.neutron [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.065 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.089 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.171 227364 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.171 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Creating file /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/2923f1d8daf946f6b549968fd1e1db18.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.172 227364 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/2923f1d8daf946f6b549968fd1e1db18.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:11.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.733 227364 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/2923f1d8daf946f6b549968fd1e1db18.tmp" returned: 1 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.734 227364 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/2923f1d8daf946f6b549968fd1e1db18.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.734 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Creating directory /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.734 227364 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.759 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:11Z|00622|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:11Z|00623|binding|INFO|Releasing lport 5843a205-5cee-4ff2-89f1-ae1b369cf659 from this chassis (sb_readonly=0)
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.971 227364 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.975 227364 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:29:11 np0005539551 nova_compute[227360]: 2025-11-29 08:29:11.995 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:14 np0005539551 kernel: tapf455cc42-f4 (unregistering): left promiscuous mode
Nov 29 03:29:14 np0005539551 NetworkManager[48922]: <info>  [1764404954.2096] device (tapf455cc42-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00624|binding|INFO|Releasing lport f455cc42-f497-49e9-84f6-0713ec25f786 from this chassis (sb_readonly=0)
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00625|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 down in Southbound
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.255 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00626|binding|INFO|Removing iface tapf455cc42-f4 ovn-installed in OVS
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.260 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.264 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.265 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 unbound from our chassis#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.266 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.267 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[780d3653-0d8f-4012-b37a-142d1dbc23f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.268 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 namespace which is not needed anymore#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.271 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 29 03:29:14 np0005539551 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000092.scope: Consumed 14.814s CPU time.
Nov 29 03:29:14 np0005539551 systemd-machined[190756]: Machine qemu-67-instance-00000092 terminated.
Nov 29 03:29:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:14 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [NOTICE]   (281597) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:14 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [NOTICE]   (281597) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:14 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [WARNING]  (281597) : Exiting Master process...
Nov 29 03:29:14 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [ALERT]    (281597) : Current worker (281599) exited with code 143 (Terminated)
Nov 29 03:29:14 np0005539551 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[281591]: [WARNING]  (281597) : All workers exited. Exiting... (0)
Nov 29 03:29:14 np0005539551 systemd[1]: libpod-b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488.scope: Deactivated successfully.
Nov 29 03:29:14 np0005539551 podman[281888]: 2025-11-29 08:29:14.399432401 +0000 UTC m=+0.043909609 container died b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:14.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:14 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:14 np0005539551 systemd[1]: var-lib-containers-storage-overlay-29c3f598d1f3b5bda2a99bcb0a51e43e009de6acd50b2e9252be9397b3eb6762-merged.mount: Deactivated successfully.
Nov 29 03:29:14 np0005539551 podman[281888]: 2025-11-29 08:29:14.433588684 +0000 UTC m=+0.078065892 container cleanup b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:14 np0005539551 systemd[1]: libpod-conmon-b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488.scope: Deactivated successfully.
Nov 29 03:29:14 np0005539551 kernel: tapf455cc42-f4: entered promiscuous mode
Nov 29 03:29:14 np0005539551 NetworkManager[48922]: <info>  [1764404954.4693] manager: (tapf455cc42-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 03:29:14 np0005539551 systemd-udevd[281867]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.474 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00627|binding|INFO|Claiming lport f455cc42-f497-49e9-84f6-0713ec25f786 for this chassis.
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00628|binding|INFO|f455cc42-f497-49e9-84f6-0713ec25f786: Claiming fa:16:3e:a5:a7:0b 10.100.0.6
Nov 29 03:29:14 np0005539551 kernel: tapf455cc42-f4 (unregistering): left promiscuous mode
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.485 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: hostname: compute-1
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 podman[281919]: 2025-11-29 08:29:14.498984003 +0000 UTC m=+0.045379978 container remove b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00629|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 ovn-installed in OVS
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00630|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 up in Southbound
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00631|binding|INFO|Releasing lport f455cc42-f497-49e9-84f6-0713ec25f786 from this chassis (sb_readonly=1)
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00632|if_status|INFO|Dropped 2 log messages in last 457 seconds (most recently, 457 seconds ago) due to excessive rate
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00633|if_status|INFO|Not setting lport f455cc42-f497-49e9-84f6-0713ec25f786 down as sb is readonly
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.501 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00634|binding|INFO|Removing iface tapf455cc42-f4 ovn-installed in OVS
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.502 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00635|binding|INFO|Releasing lport f455cc42-f497-49e9-84f6-0713ec25f786 from this chassis (sb_readonly=0)
Nov 29 03:29:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:14Z|00636|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 down in Southbound
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.506 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f938eb4d-23f6-4f21-8367-5d3f2c244844]: (4, ('Sat Nov 29 08:29:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 (b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488)\nb5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488\nSat Nov 29 08:29:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 (b5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488)\nb5c666640c39123051dd8f4c24c13516b2c70051c5cd07a343393dcf8cb6b488\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.508 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.508 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff63eaf-a4e7-42a3-a487-fba77d5cc7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.510 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8094e12d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.511 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 virtnodedevd[227732]: ethtool ioctl error on tapf455cc42-f4: No such device
Nov 29 03:29:14 np0005539551 kernel: tap8094e12d-20: left promiscuous mode
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.527 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.532 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.534 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4de0a7-f8a0-4459-b528-ce7bab2d7996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.548 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6d2ab3-0240-49da-b0a7-383bd459dfa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.550 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6116d7-e213-4e3b-b82f-b13f1e8a26bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.564 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8adb33f6-0b11-4326-91b5-7a3ea3f06245]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796770, 'reachable_time': 23904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281959, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.567 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.567 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3b5fa9-52be-4974-a628-f8749193241d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.568 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 unbound from our chassis#033[00m
Nov 29 03:29:14 np0005539551 systemd[1]: run-netns-ovnmeta\x2d8094e12d\x2d22b9\x2d4e7c\x2dbcb5\x2d2de20ab6e675.mount: Deactivated successfully.
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.569 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.570 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c999e71d-234f-4718-a36c-3b18ed4a423a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.570 139482 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 unbound from our chassis#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.571 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:14.572 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0b8d8e-04bd-4edf-b880-45468b1d3549]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.989 227364 DEBUG nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.990 227364 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.990 227364 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.991 227364 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.991 227364 DEBUG nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.991 227364 WARNING nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.994 227364 INFO nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.998 227364 INFO nova.virt.libvirt.driver [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance destroyed successfully.#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.998 227364 DEBUG nova.virt.libvirt.vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:05Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:14 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.999 227364 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:14.999 227364 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.000 227364 DEBUG os_vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.002 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf455cc42-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.003 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.005 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.007 227364 INFO os_vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4')#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.011 227364 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.012 227364 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.275 227364 DEBUG neutronclient.v2_0.client [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f455cc42-f497-49e9-84f6-0713ec25f786 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.379 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.380 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:15 np0005539551 nova_compute[227360]: 2025-11-29 08:29:15.380 227364 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:15.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.036 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404941.035626, 6076e253-7727-49f1-9aa6-6c4ccc52fc56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.036 227364 INFO nova.compute.manager [-] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.066 227364 DEBUG nova.compute.manager [None req-52f197fd-ab1f-486c-adeb-c191de278935 - - - - - -] [instance: 6076e253-7727-49f1-9aa6-6c4ccc52fc56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.413 227364 DEBUG nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.414 227364 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.414 227364 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.415 227364 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.415 227364 DEBUG nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.415 227364 WARNING nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.769 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.953 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.953 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:16 np0005539551 nova_compute[227360]: 2025-11-29 08:29:16.970 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.062 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.062 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.068 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.068 227364 INFO nova.compute.claims [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.167 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.167 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.168 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.169 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.170 227364 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.170 227364 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.170 227364 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.263 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:17 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/263396942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.690 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.696 227364 DEBUG nova.compute.provider_tree [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.721 227364 DEBUG nova.scheduler.client.report [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.754 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.755 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.829 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.829 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.856 227364 INFO nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:17 np0005539551 nova_compute[227360]: 2025-11-29 08:29:17.883 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.026 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.027 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.027 227364 INFO nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Creating image(s)#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.047 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.070 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.094 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.099 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.167 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.168 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.169 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.169 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.197 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.200 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.230 227364 DEBUG nova.policy [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:18.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.683 227364 DEBUG nova.compute.manager [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.684 227364 DEBUG nova.compute.manager [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing instance network info cache due to event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.684 227364 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.685 227364 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.685 227364 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.708 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:18 np0005539551 nova_compute[227360]: 2025-11-29 08:29:18.784 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.166 227364 DEBUG nova.objects.instance [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bcd1fdb-7518-4259-bac1-8a013dc21273 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.178 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Successfully created port: 38fb8873-2e1d-4925-8e77-f8b1c3390625 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.182 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.183 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Ensure instance console log exists: /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.183 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.183 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.184 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.332 227364 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.332 227364 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.333 227364 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.333 227364 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.333 227364 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:19 np0005539551 nova_compute[227360]: 2025-11-29 08:29:19.333 227364 WARNING nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:29:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 03:29:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:19.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:19.879 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:19.879 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:19.879 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:20 np0005539551 nova_compute[227360]: 2025-11-29 08:29:20.003 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:20 np0005539551 nova_compute[227360]: 2025-11-29 08:29:20.775 227364 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updated VIF entry in instance network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:20 np0005539551 nova_compute[227360]: 2025-11-29 08:29:20.777 227364 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:20 np0005539551 nova_compute[227360]: 2025-11-29 08:29:20.800 227364 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.234 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Successfully updated port: 38fb8873-2e1d-4925-8e77-f8b1c3390625 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.255 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.255 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.256 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.429 227364 DEBUG nova.compute.manager [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-changed-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.430 227364 DEBUG nova.compute.manager [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Refreshing instance network info cache due to event network-changed-38fb8873-2e1d-4925-8e77-f8b1c3390625. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.431 227364 DEBUG oslo_concurrency.lockutils [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.743 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:21.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:21 np0005539551 nova_compute[227360]: 2025-11-29 08:29:21.771 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:22.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.949 227364 DEBUG nova.network.neutron [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Updating instance_info_cache with network_info: [{"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.971 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.972 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Instance network_info: |[{"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.973 227364 DEBUG oslo_concurrency.lockutils [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.973 227364 DEBUG nova.network.neutron [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Refreshing network info cache for port 38fb8873-2e1d-4925-8e77-f8b1c3390625 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.979 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Start _get_guest_xml network_info=[{"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.987 227364 WARNING nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.991 227364 DEBUG nova.virt.libvirt.host [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:22 np0005539551 nova_compute[227360]: 2025-11-29 08:29:22.992 227364 DEBUG nova.virt.libvirt.host [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.003 227364 DEBUG nova.virt.libvirt.host [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.004 227364 DEBUG nova.virt.libvirt.host [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.005 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.005 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.006 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.006 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.006 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.006 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.006 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.007 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.007 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.007 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.007 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.007 227364 DEBUG nova.virt.hardware [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.010 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.039 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.039 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.040 227364 DEBUG nova.compute.manager [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Going to confirm migration 22 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/323469963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.506 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.545 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.550 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.717 227364 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.718 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.718 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.719 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.719 227364 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.720 227364 WARNING nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.720 227364 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.721 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.721 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.722 227364 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.722 227364 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.722 227364 WARNING nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.729 227364 DEBUG neutronclient.v2_0.client [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f455cc42-f497-49e9-84f6-0713ec25f786 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.730 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.730 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.730 227364 DEBUG nova.network.neutron [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.731 227364 DEBUG nova.objects.instance [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'info_cache' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:23.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/545716001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.990 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.992 227364 DEBUG nova.virt.libvirt.vif [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-238021844',display_name='tempest-ServersTestJSON-server-238021844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-238021844',id=149,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-ei2nhcyp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:17Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=9bcd1fdb-7518-4259-bac1-8a013dc21273,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.992 227364 DEBUG nova.network.os_vif_util [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.993 227364 DEBUG nova.network.os_vif_util [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:23 np0005539551 nova_compute[227360]: 2025-11-29 08:29:23.994 227364 DEBUG nova.objects.instance [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bcd1fdb-7518-4259-bac1-8a013dc21273 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.023 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <uuid>9bcd1fdb-7518-4259-bac1-8a013dc21273</uuid>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <name>instance-00000095</name>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersTestJSON-server-238021844</nova:name>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:29:22</nova:creationTime>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <nova:port uuid="38fb8873-2e1d-4925-8e77-f8b1c3390625">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="serial">9bcd1fdb-7518-4259-bac1-8a013dc21273</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="uuid">9bcd1fdb-7518-4259-bac1-8a013dc21273</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9bcd1fdb-7518-4259-bac1-8a013dc21273_disk">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:b8:79:8c"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <target dev="tap38fb8873-2e"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/console.log" append="off"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:29:24 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:29:24 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:29:24 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:29:24 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.025 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Preparing to wait for external event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.025 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.025 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.025 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.026 227364 DEBUG nova.virt.libvirt.vif [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-238021844',display_name='tempest-ServersTestJSON-server-238021844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-238021844',id=149,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-ei2nhcyp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:17Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=9bcd1fdb-7518-4259-bac1-8a013dc21273,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.026 227364 DEBUG nova.network.os_vif_util [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.027 227364 DEBUG nova.network.os_vif_util [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.027 227364 DEBUG os_vif [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.028 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.028 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.029 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.031 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38fb8873-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.032 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap38fb8873-2e, col_values=(('external_ids', {'iface-id': '38fb8873-2e1d-4925-8e77-f8b1c3390625', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:79:8c', 'vm-uuid': '9bcd1fdb-7518-4259-bac1-8a013dc21273'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.033 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539551 NetworkManager[48922]: <info>  [1764404964.0341] manager: (tap38fb8873-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.036 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.040 227364 INFO os_vif [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e')#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.129 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.129 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.130 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:b8:79:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.131 227364 INFO nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Using config drive#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.161 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:24.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.847 227364 DEBUG nova.network.neutron [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Updated VIF entry in instance network info cache for port 38fb8873-2e1d-4925-8e77-f8b1c3390625. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.848 227364 DEBUG nova.network.neutron [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Updating instance_info_cache with network_info: [{"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:24 np0005539551 nova_compute[227360]: 2025-11-29 08:29:24.868 227364 DEBUG oslo_concurrency.lockutils [req-8f96c939-fe38-40d7-a7ee-ab7b05798cff req-f6fb5845-a101-4d86-aae1-f7c1c2778f3e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9bcd1fdb-7518-4259-bac1-8a013dc21273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.014 227364 INFO nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Creating config drive at /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config#033[00m
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.020 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0njkxjlk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.170 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0njkxjlk" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.211 227364 DEBUG nova.storage.rbd_utils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.217 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:25.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:25 np0005539551 nova_compute[227360]: 2025-11-29 08:29:25.992 227364 DEBUG nova.network.neutron [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:26 np0005539551 nova_compute[227360]: 2025-11-29 08:29:26.017 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:26 np0005539551 nova_compute[227360]: 2025-11-29 08:29:26.017 227364 DEBUG nova.objects.instance [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:26 np0005539551 nova_compute[227360]: 2025-11-29 08:29:26.106 227364 DEBUG nova.storage.rbd_utils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] removing snapshot(nova-resize) on rbd image(d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:29:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:26.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:26 np0005539551 nova_compute[227360]: 2025-11-29 08:29:26.772 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.440 227364 DEBUG nova.virt.libvirt.vif [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:21Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.441 227364 DEBUG nova.network.os_vif_util [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.441 227364 DEBUG nova.network.os_vif_util [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.442 227364 DEBUG os_vif [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.443 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.443 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf455cc42-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.444 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.445 227364 INFO os_vif [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4')#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.446 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.446 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.578 227364 DEBUG oslo_concurrency.processutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:27.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:27 np0005539551 nova_compute[227360]: 2025-11-29 08:29:27.859 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/209745093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.046 227364 DEBUG oslo_concurrency.processutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config 9bcd1fdb-7518-4259-bac1-8a013dc21273_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.047 227364 INFO nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Deleting local config drive /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.049 227364 DEBUG oslo_concurrency.processutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.055 227364 DEBUG nova.compute.provider_tree [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.081 227364 DEBUG nova.scheduler.client.report [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:28 np0005539551 kernel: tap38fb8873-2e: entered promiscuous mode
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.1172] manager: (tap38fb8873-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 03:29:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:28Z|00637|binding|INFO|Claiming lport 38fb8873-2e1d-4925-8e77-f8b1c3390625 for this chassis.
Nov 29 03:29:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:28Z|00638|binding|INFO|38fb8873-2e1d-4925-8e77-f8b1c3390625: Claiming fa:16:3e:b8:79:8c 10.100.0.4
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.172 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.174 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.177 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:79:8c 10.100.0.4'], port_security=['fa:16:3e:b8:79:8c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9bcd1fdb-7518-4259-bac1-8a013dc21273', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=38fb8873-2e1d-4925-8e77-f8b1c3390625) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.178 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 38fb8873-2e1d-4925-8e77-f8b1c3390625 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.179 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:29:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:28Z|00639|binding|INFO|Setting lport 38fb8873-2e1d-4925-8e77-f8b1c3390625 ovn-installed in OVS
Nov 29 03:29:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:28Z|00640|binding|INFO|Setting lport 38fb8873-2e1d-4925-8e77-f8b1c3390625 up in Southbound
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.190 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[369d9d16-72f8-487d-b699-54bbcb24f7b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.191 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14ea2b48-91 in ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.193 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14ea2b48-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.193 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.193 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cf518b94-af61-4961-9949-7362d09efce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 systemd-udevd[282343]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.195 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[beaf6d5e-b1aa-47d4-93aa-69460158597a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 systemd-machined[190756]: New machine qemu-68-instance-00000095.
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.2086] device (tap38fb8873-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.2098] device (tap38fb8873-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.209 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b6da589f-26ca-40d9-bc41-d025c69d114e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 systemd[1]: Started Virtual Machine qemu-68-instance-00000095.
Nov 29 03:29:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.233 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8aecb46a-5bef-40e0-b2b3-4cb9f8d12a40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.264 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[410f590d-66ba-4e24-b3d8-74f795cb12dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.2699] manager: (tap14ea2b48-90): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.269 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a00c4-6149-4ddb-9084-05c77d8cad8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.296 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f70d611f-c424-4640-b183-a7cb1f034370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.299 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[33ffbafc-052d-4dd5-874a-1433296c0527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.3176] device (tap14ea2b48-90): carrier: link connected
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.323 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e81a3c3f-2318-49f3-aff6-8d223b9966be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.337 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcd0f1c-6518-41c9-93f7-239e04f57571]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801519, 'reachable_time': 35231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282375, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.350 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f5d710-4f59-438d-93e3-6db7669e882b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:168b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801519, 'tstamp': 801519}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282376, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.358 227364 INFO nova.scheduler.client.report [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocation for migration 54acf1b9-5683-4fed-b810-5ba387fc478b#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.371 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f8864d36-1445-4d38-959f-4f8676505cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801519, 'reachable_time': 35231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282377, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.403 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04f5cfe3-e5c2-41b2-b881-f94f1a503dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:28.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.475 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebc88e8-6709-4c1c-93d0-8fae9c918a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.477 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.478 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.478 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.478 227364 DEBUG oslo_concurrency.lockutils [None req-f38fd778-be81-40e8-add6-3a7a09096e7c fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:28 np0005539551 NetworkManager[48922]: <info>  [1764404968.4809] manager: (tap14ea2b48-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 03:29:28 np0005539551 kernel: tap14ea2b48-90: entered promiscuous mode
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.486 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:28Z|00641|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.504 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.505 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.506 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[502f31f7-ae3b-46bd-9787-99b3b1228561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.507 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:28.508 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'env', 'PROCESS_TAG=haproxy-14ea2b48-9984-443b-82fc-568ae98723fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14ea2b48-9984-443b-82fc-568ae98723fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.630 227364 DEBUG nova.compute.manager [req-87771c31-30f1-4b29-930f-50d7b496e413 req-a0e225bd-c4e7-410b-a7ac-849bc7bf4689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.630 227364 DEBUG oslo_concurrency.lockutils [req-87771c31-30f1-4b29-930f-50d7b496e413 req-a0e225bd-c4e7-410b-a7ac-849bc7bf4689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.631 227364 DEBUG oslo_concurrency.lockutils [req-87771c31-30f1-4b29-930f-50d7b496e413 req-a0e225bd-c4e7-410b-a7ac-849bc7bf4689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.631 227364 DEBUG oslo_concurrency.lockutils [req-87771c31-30f1-4b29-930f-50d7b496e413 req-a0e225bd-c4e7-410b-a7ac-849bc7bf4689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.632 227364 DEBUG nova.compute.manager [req-87771c31-30f1-4b29-930f-50d7b496e413 req-a0e225bd-c4e7-410b-a7ac-849bc7bf4689 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Processing event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.843 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.845 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404968.844108, 9bcd1fdb-7518-4259-bac1-8a013dc21273 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.846 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.850 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.855 227364 INFO nova.virt.libvirt.driver [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Instance spawned successfully.#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.855 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.868 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.877 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.883 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.884 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.885 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.886 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.886 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.887 227364 DEBUG nova.virt.libvirt.driver [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.909 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.910 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404968.8441887, 9bcd1fdb-7518-4259-bac1-8a013dc21273 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.910 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:28 np0005539551 podman[282452]: 2025-11-29 08:29:28.913934182 +0000 UTC m=+0.080766926 container create 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.947 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.952 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764404968.8492858, 9bcd1fdb-7518-4259-bac1-8a013dc21273 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.952 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:28 np0005539551 systemd[1]: Started libpod-conmon-69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c.scope.
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.969 227364 INFO nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Took 10.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.970 227364 DEBUG nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:28 np0005539551 podman[282452]: 2025-11-29 08:29:28.878568375 +0000 UTC m=+0.045401129 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.977 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:28 np0005539551 nova_compute[227360]: 2025-11-29 08:29:28.980 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:28 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:29:28 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca99a2f2774d3f3e49e39ed83d2ebf6ff3fce30d7fe1e4f8da4ea56bd7bac8d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:29 np0005539551 podman[282452]: 2025-11-29 08:29:29.003600888 +0000 UTC m=+0.170433612 container init 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.005 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:29 np0005539551 podman[282452]: 2025-11-29 08:29:29.010559916 +0000 UTC m=+0.177392620 container start 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.033 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:29 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [NOTICE]   (282471) : New worker (282473) forked
Nov 29 03:29:29 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [NOTICE]   (282471) : Loading success.
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.054 227364 INFO nova.compute.manager [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Took 12.02 seconds to build instance.#033[00m
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.074 227364 DEBUG oslo_concurrency.lockutils [None req-df2f871d-543a-4fce-8140-b2c0fac1d039 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 03:29:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.498 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404954.4972568, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.499 227364 INFO nova.compute.manager [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:29:29 np0005539551 nova_compute[227360]: 2025-11-29 08:29:29.518 227364 DEBUG nova.compute.manager [None req-e88a0468-6d92-42a8-87bc-efc71baf011e - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:29.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 03:29:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.851 227364 DEBUG nova.compute.manager [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.852 227364 DEBUG oslo_concurrency.lockutils [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.852 227364 DEBUG oslo_concurrency.lockutils [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.852 227364 DEBUG oslo_concurrency.lockutils [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.853 227364 DEBUG nova.compute.manager [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] No waiting events found dispatching network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.853 227364 WARNING nova.compute.manager [req-a47d912d-a0ea-47a3-972e-e28ae1e958be req-2059f33e-095f-4267-a3ac-ee86ddafacfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received unexpected event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.924 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.925 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.926 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.927 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.927 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.929 227364 INFO nova.compute.manager [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Terminating instance#033[00m
Nov 29 03:29:30 np0005539551 nova_compute[227360]: 2025-11-29 08:29:30.932 227364 DEBUG nova.compute.manager [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:31 np0005539551 kernel: tap38fb8873-2e (unregistering): left promiscuous mode
Nov 29 03:29:31 np0005539551 NetworkManager[48922]: <info>  [1764404971.2016] device (tap38fb8873-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:31Z|00642|binding|INFO|Releasing lport 38fb8873-2e1d-4925-8e77-f8b1c3390625 from this chassis (sb_readonly=0)
Nov 29 03:29:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:31Z|00643|binding|INFO|Setting lport 38fb8873-2e1d-4925-8e77-f8b1c3390625 down in Southbound
Nov 29 03:29:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:31Z|00644|binding|INFO|Removing iface tap38fb8873-2e ovn-installed in OVS
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:31.241 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:79:8c 10.100.0.4'], port_security=['fa:16:3e:b8:79:8c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9bcd1fdb-7518-4259-bac1-8a013dc21273', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=38fb8873-2e1d-4925-8e77-f8b1c3390625) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:31.242 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 38fb8873-2e1d-4925-8e77-f8b1c3390625 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:29:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:31.244 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14ea2b48-9984-443b-82fc-568ae98723fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:31.245 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7005d725-da50-4be3-8aaf-46cb8f317059]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:31.245 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace which is not needed anymore#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.250 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 29 03:29:31 np0005539551 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000095.scope: Consumed 2.838s CPU time.
Nov 29 03:29:31 np0005539551 systemd-machined[190756]: Machine qemu-68-instance-00000095 terminated.
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.372 227364 INFO nova.virt.libvirt.driver [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Instance destroyed successfully.#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.372 227364 DEBUG nova.objects.instance [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 9bcd1fdb-7518-4259-bac1-8a013dc21273 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.398 227364 DEBUG nova.virt.libvirt.vif [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-238021844',display_name='tempest-ServersTestJSON-server-238021844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-238021844',id=149,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-ei2nhcyp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:29Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=9bcd1fdb-7518-4259-bac1-8a013dc21273,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.399 227364 DEBUG nova.network.os_vif_util [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "address": "fa:16:3e:b8:79:8c", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38fb8873-2e", "ovs_interfaceid": "38fb8873-2e1d-4925-8e77-f8b1c3390625", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.400 227364 DEBUG nova.network.os_vif_util [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.400 227364 DEBUG os_vif [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.403 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38fb8873-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.404 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.405 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.408 227364 INFO os_vif [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:79:8c,bridge_name='br-int',has_traffic_filtering=True,id=38fb8873-2e1d-4925-8e77-f8b1c3390625,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38fb8873-2e')#033[00m
Nov 29 03:29:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:31.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:31 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [NOTICE]   (282471) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:31 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [NOTICE]   (282471) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:31 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [WARNING]  (282471) : Exiting Master process...
Nov 29 03:29:31 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [ALERT]    (282471) : Current worker (282473) exited with code 143 (Terminated)
Nov 29 03:29:31 np0005539551 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[282467]: [WARNING]  (282471) : All workers exited. Exiting... (0)
Nov 29 03:29:31 np0005539551 systemd[1]: libpod-69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c.scope: Deactivated successfully.
Nov 29 03:29:31 np0005539551 podman[282506]: 2025-11-29 08:29:31.820123031 +0000 UTC m=+0.493176703 container died 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:29:31 np0005539551 nova_compute[227360]: 2025-11-29 08:29:31.849 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:32 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:32 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ca99a2f2774d3f3e49e39ed83d2ebf6ff3fce30d7fe1e4f8da4ea56bd7bac8d0-merged.mount: Deactivated successfully.
Nov 29 03:29:32 np0005539551 podman[282506]: 2025-11-29 08:29:32.098614156 +0000 UTC m=+0.771667848 container cleanup 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:29:32 np0005539551 systemd[1]: libpod-conmon-69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c.scope: Deactivated successfully.
Nov 29 03:29:32 np0005539551 podman[282564]: 2025-11-29 08:29:32.174156238 +0000 UTC m=+0.047797694 container remove 69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.179 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[98fc4a3d-2fc8-437a-afb9-c4d451497eab]: (4, ('Sat Nov 29 08:29:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c)\n69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c\nSat Nov 29 08:29:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c)\n69f5e7eb8e4393cc73a7c9b147770581f217a8df8a8369361c7b3828ce9d8c3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.181 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[21c006a5-a630-45e1-890c-c51b68f7b48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.182 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:32 np0005539551 kernel: tap14ea2b48-90: left promiscuous mode
Nov 29 03:29:32 np0005539551 nova_compute[227360]: 2025-11-29 08:29:32.184 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:32 np0005539551 nova_compute[227360]: 2025-11-29 08:29:32.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.202 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba2780b-439b-45f7-9c9c-a5822fb1b252]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.221 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5075e8c8-bcb6-4e77-ae0e-428fe7551baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.222 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[98850a49-625b-45bc-aac9-e1bc93faee55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.238 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3c947a-5614-4174-85ee-69b540d3e2eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801513, 'reachable_time': 20259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282580, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.241 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:32.241 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[4e67c3b1-3aa3-40f5-b3c2-cfcfdccacf91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:32 np0005539551 systemd[1]: run-netns-ovnmeta\x2d14ea2b48\x2d9984\x2d443b\x2d82fc\x2d568ae98723fc.mount: Deactivated successfully.
Nov 29 03:29:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:32.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.318 227364 DEBUG nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-unplugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.318 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.319 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.319 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.319 227364 DEBUG nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] No waiting events found dispatching network-vif-unplugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.319 227364 DEBUG nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-unplugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.320 227364 DEBUG nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.320 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.320 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.321 227364 DEBUG oslo_concurrency.lockutils [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.321 227364 DEBUG nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] No waiting events found dispatching network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:33 np0005539551 nova_compute[227360]: 2025-11-29 08:29:33.322 227364 WARNING nova.compute.manager [req-aabbc295-00e0-44ce-a261-d23519e920c3 req-1db1abe4-8da7-4673-b18d-1e1773999c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received unexpected event network-vif-plugged-38fb8873-2e1d-4925-8e77-f8b1c3390625 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:33.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:34.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.718 227364 INFO nova.virt.libvirt.driver [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Deleting instance files /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273_del#033[00m
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.719 227364 INFO nova.virt.libvirt.driver [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Deletion of /var/lib/nova/instances/9bcd1fdb-7518-4259-bac1-8a013dc21273_del complete#033[00m
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.795 227364 INFO nova.compute.manager [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Took 3.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.796 227364 DEBUG oslo.service.loopingcall [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.796 227364 DEBUG nova.compute.manager [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:29:34 np0005539551 nova_compute[227360]: 2025-11-29 08:29:34.796 227364 DEBUG nova.network.neutron [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.429 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:35.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.829 227364 DEBUG nova.network.neutron [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.863 227364 INFO nova.compute.manager [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Took 1.07 seconds to deallocate network for instance.#033[00m
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.935 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.936 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:35 np0005539551 nova_compute[227360]: 2025-11-29 08:29:35.991 227364 DEBUG oslo_concurrency.processutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1784762362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.447 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.466 227364 DEBUG oslo_concurrency.processutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.472 227364 DEBUG nova.compute.provider_tree [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.490 227364 DEBUG nova.scheduler.client.report [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.516 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.542 227364 INFO nova.scheduler.client.report [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 9bcd1fdb-7518-4259-bac1-8a013dc21273#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.603 227364 DEBUG oslo_concurrency.lockutils [None req-74373240-88ba-4dfa-a257-ad60a03bdbfa 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "9bcd1fdb-7518-4259-bac1-8a013dc21273" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:36 np0005539551 nova_compute[227360]: 2025-11-29 08:29:36.776 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.035 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.036 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.050 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.147 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.148 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.153 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.154 227364 INFO nova.compute.claims [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.336 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:37.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4240765415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.822 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:37 np0005539551 nova_compute[227360]: 2025-11-29 08:29:37.828 227364 DEBUG nova.compute.provider_tree [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.362 227364 DEBUG nova.compute.manager [req-30b3788e-818e-49d7-97fa-f028b9bb133a req-cb6d5cdd-93e8-4b78-8287-6d23dc56358e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Received event network-vif-deleted-38fb8873-2e1d-4925-8e77-f8b1c3390625 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.379 227364 DEBUG nova.scheduler.client.report [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.435 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.436 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.578 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.579 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.621 227364 INFO nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.660 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.771 227364 INFO nova.virt.block_device [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Booting with volume-backed-image 4873db8c-b414-4e95-acd9-77caabebe722 at /dev/vda#033[00m
Nov 29 03:29:38 np0005539551 nova_compute[227360]: 2025-11-29 08:29:38.852 227364 DEBUG nova.policy [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64b11a4dc36b4f55b85dbe846183be55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae71059d02774857be85797a3be0e4e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 03:29:39 np0005539551 nova_compute[227360]: 2025-11-29 08:29:39.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:39 np0005539551 podman[282627]: 2025-11-29 08:29:39.603067347 +0000 UTC m=+0.055991686 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:39 np0005539551 podman[282626]: 2025-11-29 08:29:39.669934756 +0000 UTC m=+0.126653537 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:29:39 np0005539551 podman[282628]: 2025-11-29 08:29:39.683514333 +0000 UTC m=+0.121771814 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:29:39 np0005539551 nova_compute[227360]: 2025-11-29 08:29:39.722 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Successfully created port: bdb96056-9ec2-474c-abe0-50ee1cbb4097 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:39.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:40 np0005539551 nova_compute[227360]: 2025-11-29 08:29:40.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:40 np0005539551 nova_compute[227360]: 2025-11-29 08:29:40.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:29:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:41 np0005539551 nova_compute[227360]: 2025-11-29 08:29:41.450 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:41.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:41 np0005539551 nova_compute[227360]: 2025-11-29 08:29:41.827 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:43 np0005539551 nova_compute[227360]: 2025-11-29 08:29:43.293 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:43 np0005539551 nova_compute[227360]: 2025-11-29 08:29:43.293 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:43 np0005539551 nova_compute[227360]: 2025-11-29 08:29:43.293 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:29:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:43.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:45.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.179 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Successfully updated port: bdb96056-9ec2-474c-abe0-50ee1cbb4097 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.235 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.236 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.236 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.370 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404971.3687372, 9bcd1fdb-7518-4259-bac1-8a013dc21273 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.370 227364 INFO nova.compute.manager [-] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.386 227364 DEBUG nova.compute.manager [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-changed-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.386 227364 DEBUG nova.compute.manager [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Refreshing instance network info cache due to event network-changed-bdb96056-9ec2-474c-abe0-50ee1cbb4097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.387 227364 DEBUG oslo_concurrency.lockutils [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.443 227364 DEBUG nova.compute.manager [None req-8a185366-8923-4d07-b4e5-674a65d811f5 - - - - - -] [instance: 9bcd1fdb-7518-4259-bac1-8a013dc21273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.445 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:46.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.493 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.829 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539551 nova_compute[227360]: 2025-11-29 08:29:46.964 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updating instance_info_cache with network_info: [{"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.311 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-480ae817-3676-4499-a047-6b8b383e7bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.312 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.312 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.313 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.313 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.413 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.413 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.497 227364 DEBUG nova.network.neutron [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating instance_info_cache with network_info: [{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.599 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.599 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance network_info: |[{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.600 227364 DEBUG oslo_concurrency.lockutils [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:47 np0005539551 nova_compute[227360]: 2025-11-29 08:29:47.600 227364 DEBUG nova.network.neutron [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Refreshing network info cache for port bdb96056-9ec2-474c-abe0-50ee1cbb4097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:47.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/351833046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.109 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.225 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.226 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.380 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4101MB free_disk=20.785404205322266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.381 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.381 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.509 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 480ae817-3676-4499-a047-6b8b383e7bf2 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.509 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.511 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.511 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:29:48 np0005539551 nova_compute[227360]: 2025-11-29 08:29:48.696 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2420349656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.120 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.126 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.160 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.196 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.196 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.197 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.556 227364 DEBUG nova.network.neutron [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updated VIF entry in instance network info cache for port bdb96056-9ec2-474c-abe0-50ee1cbb4097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.557 227364 DEBUG nova.network.neutron [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating instance_info_cache with network_info: [{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:49 np0005539551 nova_compute[227360]: 2025-11-29 08:29:49.597 227364 DEBUG oslo_concurrency.lockutils [req-a73fc657-0ddc-4250-a329-68902f4d3098 req-e1fcf5d0-d0a9-410c-966c-5b3c816b7525 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:49.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:29:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:51 np0005539551 nova_compute[227360]: 2025-11-29 08:29:51.495 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:51.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:51 np0005539551 nova_compute[227360]: 2025-11-29 08:29:51.832 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:52.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:53 np0005539551 nova_compute[227360]: 2025-11-29 08:29:53.440 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:53 np0005539551 nova_compute[227360]: 2025-11-29 08:29:53.441 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:29:53 np0005539551 nova_compute[227360]: 2025-11-29 08:29:53.441 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:53 np0005539551 nova_compute[227360]: 2025-11-29 08:29:53.441 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:29:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356953745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:54.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:56.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.500 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.508 227364 DEBUG os_brick.utils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.509 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.530 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.530 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[58f325cc-9f82-4750-a77e-4fda7ea6eab4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.531 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.537 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.538 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b3a40d-4ff5-4bdc-8220-825ea39a3b72]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.540 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.546 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.546 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[52cd2650-1874-4554-9677-39ba1323b27a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.548 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5fd03c-bff8-4540-bedd-d2eb00f60b62]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.548 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.575 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.577 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.579 227364 DEBUG os_brick.initiator.connectors.lightos [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.579 227364 DEBUG os_brick.initiator.connectors.lightos [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.579 227364 DEBUG os_brick.initiator.connectors.lightos [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.580 227364 DEBUG os_brick.utils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:29:56 np0005539551 nova_compute[227360]: 2025-11-29 08:29:56.580 227364 DEBUG nova.virt.block_device [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating existing volume attachment record: a9002bdb-9606-46ba-88a2-e1d76472e1dd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.874 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.876 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.877 227364 INFO nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Creating image(s)#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.877 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.877 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Ensure instance console log exists: /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.878 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.878 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.878 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.880 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start _get_guest_xml network_info=[{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'attached_at': '', 'detached_at': '', 'volume_id': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'serial': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'a9002bdb-9606-46ba-88a2-e1d76472e1dd', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.884 227364 WARNING nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.888 227364 DEBUG nova.virt.libvirt.host [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.888 227364 DEBUG nova.virt.libvirt.host [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.891 227364 DEBUG nova.virt.libvirt.host [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.891 227364 DEBUG nova.virt.libvirt.host [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.892 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.892 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.893 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.893 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.893 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.893 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.894 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.894 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.894 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.894 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.894 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.895 227364 DEBUG nova.virt.hardware [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.923 227364 DEBUG nova.storage.rbd_utils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:57 np0005539551 nova_compute[227360]: 2025-11-29 08:29:57.926 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:58Z|00645|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/358759204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.406 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.456 227364 DEBUG nova.virt.libvirt.vif [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-808979613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-808979613',id=151,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-ooph05ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:38Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.457 227364 DEBUG nova.network.os_vif_util [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.458 227364 DEBUG nova.network.os_vif_util [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.459 227364 DEBUG nova.objects.instance [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:58.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.487 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <uuid>a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</uuid>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <name>instance-00000097</name>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-808979613</nova:name>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:29:57</nova:creationTime>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <nova:port uuid="bdb96056-9ec2-474c-abe0-50ee1cbb4097">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="serial">a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="uuid">a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-afa6b782-fcc2-4a45-bde3-dbc6a832d5c6">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <serial>afa6b782-fcc2-4a45-bde3-dbc6a832d5c6</serial>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a2:3e:57"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <target dev="tapbdb96056-9e"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/console.log" append="off"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:29:58 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:29:58 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:29:58 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:29:58 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.488 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Preparing to wait for external event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.490 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.490 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.490 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.491 227364 DEBUG nova.virt.libvirt.vif [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-808979613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-808979613',id=151,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-ooph05ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootF
romVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:38Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.491 227364 DEBUG nova.network.os_vif_util [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.492 227364 DEBUG nova.network.os_vif_util [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.492 227364 DEBUG os_vif [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.493 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.493 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.493 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.496 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.496 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdb96056-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.496 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdb96056-9e, col_values=(('external_ids', {'iface-id': 'bdb96056-9ec2-474c-abe0-50ee1cbb4097', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:3e:57', 'vm-uuid': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 NetworkManager[48922]: <info>  [1764404998.4992] manager: (tapbdb96056-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.501 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.547 227364 INFO os_vif [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e')#033[00m
Nov 29 03:29:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:58Z|00646|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.601 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.602 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.602 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:a2:3e:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.602 227364 INFO nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Using config drive#033[00m
Nov 29 03:29:58 np0005539551 nova_compute[227360]: 2025-11-29 08:29:58.625 227364 DEBUG nova.storage.rbd_utils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.426 227364 INFO nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Creating config drive at /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config#033[00m
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.431 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfzmwl2z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.497 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.498 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.499 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.562 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfzmwl2z" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.592 227364 DEBUG nova.storage.rbd_utils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.597 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.739 227364 DEBUG oslo_concurrency.processutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.740 227364 INFO nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Deleting local config drive /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config because it was imported into RBD.
Nov 29 03:29:59 np0005539551 kernel: tapbdb96056-9e: entered promiscuous mode
Nov 29 03:29:59 np0005539551 NetworkManager[48922]: <info>  [1764404999.7793] manager: (tapbdb96056-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 03:29:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:59Z|00647|binding|INFO|Claiming lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 for this chassis.
Nov 29 03:29:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:59Z|00648|binding|INFO|bdb96056-9ec2-474c-abe0-50ee1cbb4097: Claiming fa:16:3e:a2:3e:57 10.100.0.10
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.785 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.786 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.788 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:29:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:59Z|00649|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 ovn-installed in OVS
Nov 29 03:29:59 np0005539551 ovn_controller[130266]: 2025-11-29T08:29:59Z|00650|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 up in Southbound
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.804 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[439301e7-1a2f-4a64-8292-dd15666ed803]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 systemd-udevd[283041]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:59 np0005539551 systemd-machined[190756]: New machine qemu-69-instance-00000097.
Nov 29 03:29:59 np0005539551 NetworkManager[48922]: <info>  [1764404999.8212] device (tapbdb96056-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:29:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:59 np0005539551 NetworkManager[48922]: <info>  [1764404999.8223] device (tapbdb96056-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:59 np0005539551 systemd[1]: Started Virtual Machine qemu-69-instance-00000097.
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.836 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[61ace1e2-2561-4f61-983d-24fad5eb6989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.839 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d30015a1-9127-45a5-9281-42a9c2c2bd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.863 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[27472657-1b67-4e5a-8f4e-c01b831badad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.879 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[25753c81-4919-4405-b1b0-402c470b5a10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283053, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.893 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eb316491-04bc-4849-ba54-f34088c8eef7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283055, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283055, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.894 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539551 nova_compute[227360]: 2025-11-29 08:29:59.896 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:29:59.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:30:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.577 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405000.5767505, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.577 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Started (Lifecycle Event)
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.609 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.613 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405000.5768933, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.614 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Paused (Lifecycle Event)
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.634 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.638 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.685 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.826 227364 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.826 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.826 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.827 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.827 227364 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Processing event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.827 227364 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.828 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.828 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.828 227364 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.828 227364 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.829 227364 WARNING nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state building and task_state spawning.
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.829 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.833 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405000.8328302, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.833 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Resumed (Lifecycle Event)
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.835 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.838 227364 INFO nova.virt.libvirt.driver [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance spawned successfully.
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.838 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.861 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.864 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.873 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.873 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.874 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.874 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.875 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.875 227364 DEBUG nova.virt.libvirt.driver [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.898 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.948 227364 INFO nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Took 3.07 seconds to spawn the instance on the hypervisor.
Nov 29 03:30:00 np0005539551 nova_compute[227360]: 2025-11-29 08:30:00.948 227364 DEBUG nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:01 np0005539551 nova_compute[227360]: 2025-11-29 08:30:01.023 227364 INFO nova.compute.manager [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Took 23.89 seconds to build instance.
Nov 29 03:30:01 np0005539551 nova_compute[227360]: 2025-11-29 08:30:01.039 227364 DEBUG oslo_concurrency.lockutils [None req-6cb31237-6b99-4ff2-8ba3-904b952b2324 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:02 np0005539551 nova_compute[227360]: 2025-11-29 08:30:02.081 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:03 np0005539551 nova_compute[227360]: 2025-11-29 08:30:03.499 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:03 np0005539551 nova_compute[227360]: 2025-11-29 08:30:03.622 227364 INFO nova.compute.manager [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Rescuing
Nov 29 03:30:03 np0005539551 nova_compute[227360]: 2025-11-29 08:30:03.622 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:03 np0005539551 nova_compute[227360]: 2025-11-29 08:30:03.622 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:03 np0005539551 nova_compute[227360]: 2025-11-29 08:30:03.623 227364 DEBUG nova.network.neutron [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:30:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:03.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:04 np0005539551 nova_compute[227360]: 2025-11-29 08:30:04.838 227364 DEBUG nova.network.neutron [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating instance_info_cache with network_info: [{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:04 np0005539551 nova_compute[227360]: 2025-11-29 08:30:04.860 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:05 np0005539551 nova_compute[227360]: 2025-11-29 08:30:05.230 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:30:05 np0005539551 nova_compute[227360]: 2025-11-29 08:30:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:05 np0005539551 nova_compute[227360]: 2025-11-29 08:30:05.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:30:05 np0005539551 nova_compute[227360]: 2025-11-29 08:30:05.424 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:30:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:05.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:07 np0005539551 nova_compute[227360]: 2025-11-29 08:30:07.137 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:08 np0005539551 nova_compute[227360]: 2025-11-29 08:30:08.502 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:08.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:09.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:10 np0005539551 podman[283099]: 2025-11-29 08:30:10.624249432 +0000 UTC m=+0.070570670 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:30:10 np0005539551 podman[283100]: 2025-11-29 08:30:10.639935846 +0000 UTC m=+0.084660321 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:30:10 np0005539551 podman[283098]: 2025-11-29 08:30:10.650142802 +0000 UTC m=+0.097666203 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:30:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:12 np0005539551 nova_compute[227360]: 2025-11-29 08:30:12.175 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:13 np0005539551 nova_compute[227360]: 2025-11-29 08:30:13.504 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:14.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:15 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:15Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:3e:57 10.100.0.10
Nov 29 03:30:15 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:15Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:3e:57 10.100.0.10
Nov 29 03:30:15 np0005539551 nova_compute[227360]: 2025-11-29 08:30:15.267 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:30:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:15.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.176 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 kernel: tapbdb96056-9e (unregistering): left promiscuous mode
Nov 29 03:30:17 np0005539551 NetworkManager[48922]: <info>  [1764405017.5492] device (tapbdb96056-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:17Z|00651|binding|INFO|Releasing lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 from this chassis (sb_readonly=0)
Nov 29 03:30:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:17Z|00652|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 down in Southbound
Nov 29 03:30:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:17Z|00653|binding|INFO|Removing iface tapbdb96056-9e ovn-installed in OVS
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.558 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.564 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.565 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.566 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.578 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.582 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[39b309ab-5763-41bf-bc4f-1fd572e10966]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 29 03:30:17 np0005539551 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000097.scope: Consumed 14.103s CPU time.
Nov 29 03:30:17 np0005539551 systemd-machined[190756]: Machine qemu-69-instance-00000097 terminated.
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.611 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a6848cfe-a955-42ca-a5c7-2b198e1457f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.613 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[85a72da3-be00-4121-8753-79d2e2cddeff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.639 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[349cf0cf-5737-4e29-8a05-ab7d2151ae42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.656 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f96f8706-4478-47ee-b314-8ed537f14786]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283171, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.672 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dea24527-d959-4ec2-9ec8-c2afe4b6c768]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283172, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283172, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.674 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.675 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 nova_compute[227360]: 2025-11-29 08:30:17.679 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.679 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.680 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.680 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:17.681 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:17.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.278 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.283 227364 INFO nova.virt.libvirt.driver [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance destroyed successfully.#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.283 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.300 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Attempting a stable device rescue#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.506 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.536 227364 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.536 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.536 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.537 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.537 227364 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.537 227364 WARNING nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.537 227364 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.537 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.538 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.538 227364 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.538 227364 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.538 227364 WARNING nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.601497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018601576, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1824, "num_deletes": 262, "total_data_size": 4000958, "memory_usage": 4066608, "flush_reason": "Manual Compaction"}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018619455, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 2627036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53881, "largest_seqno": 55700, "table_properties": {"data_size": 2619315, "index_size": 4535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16959, "raw_average_key_size": 20, "raw_value_size": 2603552, "raw_average_value_size": 3167, "num_data_blocks": 197, "num_entries": 822, "num_filter_entries": 822, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404880, "oldest_key_time": 1764404880, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 18000 microseconds, and 6765 cpu microseconds.
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619497) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 2627036 bytes OK
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619516) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.621101) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.621113) EVENT_LOG_v1 {"time_micros": 1764405018621109, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.621128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 3992524, prev total WAL file size 3992524, number of live WAL files 2.
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.622141) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373539' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2565KB)], [105(12MB)]
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018622172, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 15754413, "oldest_snapshot_seqno": -1}
Nov 29 03:30:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8986 keys, 15596638 bytes, temperature: kUnknown
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018730439, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 15596638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15533437, "index_size": 39601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 232750, "raw_average_key_size": 25, "raw_value_size": 15370230, "raw_average_value_size": 1710, "num_data_blocks": 1559, "num_entries": 8986, "num_filter_entries": 8986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.730694) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 15596638 bytes
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.731851) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.4 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.5 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 9527, records dropped: 541 output_compression: NoCompression
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.731867) EVENT_LOG_v1 {"time_micros": 1764405018731860, "job": 66, "event": "compaction_finished", "compaction_time_micros": 108333, "compaction_time_cpu_micros": 32738, "output_level": 6, "num_output_files": 1, "total_output_size": 15596638, "num_input_records": 9527, "num_output_records": 8986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018732398, "job": 66, "event": "table_file_deletion", "file_number": 107}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018734509, "job": 66, "event": "table_file_deletion", "file_number": 105}
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.622098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.734534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.734538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.734539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.734541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:18.734542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.777 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.781 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.781 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Creating image(s)#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.807 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.813 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.858 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.885 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.889 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "f3f073566c636804905346d95d4f180714d4131b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539551 nova_compute[227360]: 2025-11-29 08:30:18.891 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f3f073566c636804905346d95d4f180714d4131b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.178 227364 DEBUG nova.virt.libvirt.imagebackend [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/2d9af198-baca-4fb0-8bbb-100141aac9db/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/2d9af198-baca-4fb0-8bbb-100141aac9db/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.251 227364 DEBUG nova.virt.libvirt.imagebackend [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/2d9af198-baca-4fb0-8bbb-100141aac9db/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.252 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] cloning images/2d9af198-baca-4fb0-8bbb-100141aac9db@snap to None/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.388 227364 DEBUG oslo_concurrency.lockutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f3f073566c636804905346d95d4f180714d4131b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.439 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'migration_context' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.451 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.454 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start _get_guest_xml network_info=[{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:a2:3e:57"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '2d9af198-baca-4fb0-8bbb-100141aac9db', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'attached_at': '', 'detached_at': '', 'volume_id': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6', 'serial': 'afa6b782-fcc2-4a45-bde3-dbc6a832d5c6'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'a9002bdb-9606-46ba-88a2-e1d76472e1dd', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.454 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.470 227364 WARNING nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.475 227364 DEBUG nova.virt.libvirt.host [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.476 227364 DEBUG nova.virt.libvirt.host [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.479 227364 DEBUG nova.virt.libvirt.host [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.479 227364 DEBUG nova.virt.libvirt.host [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.480 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.481 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.481 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.481 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.482 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.482 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.482 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.482 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.482 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.483 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.483 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.483 227364 DEBUG nova.virt.hardware [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.483 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:19 np0005539551 nova_compute[227360]: 2025-11-29 08:30:19.538 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:19.880 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:19.880 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:19.881 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3120384837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.060 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.085 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/997397164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.506 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.507 227364 DEBUG nova.virt.libvirt.vif [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-808979613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-808979613',id=151,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-ooph05ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:00Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:a2:3e:57"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.508 227364 DEBUG nova.network.os_vif_util [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:a2:3e:57"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.508 227364 DEBUG nova.network.os_vif_util [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.510 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.523 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <uuid>a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</uuid>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <name>instance-00000097</name>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-808979613</nova:name>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:30:19</nova:creationTime>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <nova:port uuid="bdb96056-9ec2-474c-abe0-50ee1cbb4097">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="serial">a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="uuid">a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-afa6b782-fcc2-4a45-bde3-dbc6a832d5c6">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <serial>afa6b782-fcc2-4a45-bde3-dbc6a832d5c6</serial>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.rescue">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <boot order="1"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a2:3e:57"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <target dev="tapbdb96056-9e"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/console.log" append="off"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:30:20 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:30:20 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:30:20 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:30:20 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.530 227364 INFO nova.virt.libvirt.driver [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance destroyed successfully.#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.583 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.583 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.584 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.584 227364 DEBUG nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:a2:3e:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.584 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Using config drive#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.611 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.630 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.665 227364 DEBUG nova.objects.instance [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'keypairs' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:20.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:20 np0005539551 nova_compute[227360]: 2025-11-29 08:30:20.998 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Creating config drive at /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.003 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqkvumbr4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.136 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqkvumbr4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.171 227364 DEBUG nova.storage.rbd_utils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.175 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.329 227364 DEBUG oslo_concurrency.processutils [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.330 227364 INFO nova.virt.libvirt.driver [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Deleting local config drive /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:30:21 np0005539551 kernel: tapbdb96056-9e: entered promiscuous mode
Nov 29 03:30:21 np0005539551 NetworkManager[48922]: <info>  [1764405021.3875] manager: (tapbdb96056-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:21Z|00654|binding|INFO|Claiming lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 for this chassis.
Nov 29 03:30:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:21Z|00655|binding|INFO|bdb96056-9ec2-474c-abe0-50ee1cbb4097: Claiming fa:16:3e:a2:3e:57 10.100.0.10
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.395 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.397 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.398 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:30:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:21Z|00656|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 ovn-installed in OVS
Nov 29 03:30:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:21Z|00657|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 up in Southbound
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539551 systemd-udevd[283461]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.414 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48b5283a-b3dd-43db-a488-4e9eefb9461b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 systemd-machined[190756]: New machine qemu-70-instance-00000097.
Nov 29 03:30:21 np0005539551 systemd[1]: Started Virtual Machine qemu-70-instance-00000097.
Nov 29 03:30:21 np0005539551 NetworkManager[48922]: <info>  [1764405021.4319] device (tapbdb96056-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:21 np0005539551 NetworkManager[48922]: <info>  [1764405021.4328] device (tapbdb96056-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.444 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf331a6-43db-4e6e-8a83-5a7ecb4fa14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.447 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0d60e476-4a7f-48d2-9e95-9df95814807d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.479 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c325f6cd-0e1e-421d-876a-3d63a1ea73e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.494 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ff895f8b-4221-44ac-b6ad-2a2853fa727f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283473, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.507 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c998dbf0-25e0-4b1e-b5ca-81329dbdd156]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283474, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283474, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.509 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.511 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.512 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.513 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.513 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.514 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:21.514 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.627 227364 DEBUG nova.compute.manager [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.628 227364 DEBUG oslo_concurrency.lockutils [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.633 227364 DEBUG oslo_concurrency.lockutils [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.634 227364 DEBUG oslo_concurrency.lockutils [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.634 227364 DEBUG nova.compute.manager [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.634 227364 WARNING nova.compute.manager [req-56f9ffee-533c-40d6-8004-271f22340a41 req-649fb08c-bf14-4656-8b02-16c98efa11cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:30:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.964 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.964 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405021.9639974, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.965 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.970 227364 DEBUG nova.compute.manager [None req-5d30b1db-bc73-45f5-bc4a-7264f46d6965 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.988 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:21 np0005539551 nova_compute[227360]: 2025-11-29 08:30:21.991 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.015 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.015 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405021.965337, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.015 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.039 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.044 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:22 np0005539551 nova_compute[227360]: 2025-11-29 08:30:22.179 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:22.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.205 227364 INFO nova.compute.manager [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Unrescuing#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.205 227364 DEBUG oslo_concurrency.lockutils [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.206 227364 DEBUG oslo_concurrency.lockutils [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.206 227364 DEBUG nova.network.neutron [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.508 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.609762) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023609796, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 309, "num_deletes": 251, "total_data_size": 129072, "memory_usage": 134736, "flush_reason": "Manual Compaction"}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023612449, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 83952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55705, "largest_seqno": 56009, "table_properties": {"data_size": 82022, "index_size": 158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5667, "raw_average_key_size": 20, "raw_value_size": 78098, "raw_average_value_size": 279, "num_data_blocks": 7, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405019, "oldest_key_time": 1764405019, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2729 microseconds, and 879 cpu microseconds.
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612491) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 83952 bytes OK
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612510) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.614285) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.614328) EVENT_LOG_v1 {"time_micros": 1764405023614323, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.614343) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 126838, prev total WAL file size 126838, number of live WAL files 2.
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.614654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(81KB)], [108(14MB)]
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023614681, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 15680590, "oldest_snapshot_seqno": -1}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8755 keys, 11838077 bytes, temperature: kUnknown
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023691451, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11838077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11781365, "index_size": 33752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 228157, "raw_average_key_size": 26, "raw_value_size": 11627046, "raw_average_value_size": 1328, "num_data_blocks": 1314, "num_entries": 8755, "num_filter_entries": 8755, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.691734) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11838077 bytes
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.693489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.0 rd, 154.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.9 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(327.8) write-amplify(141.0) OK, records in: 9265, records dropped: 510 output_compression: NoCompression
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.693532) EVENT_LOG_v1 {"time_micros": 1764405023693517, "job": 68, "event": "compaction_finished", "compaction_time_micros": 76863, "compaction_time_cpu_micros": 27048, "output_level": 6, "num_output_files": 1, "total_output_size": 11838077, "num_input_records": 9265, "num_output_records": 8755, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023693756, "job": 68, "event": "table_file_deletion", "file_number": 110}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023697145, "job": 68, "event": "table_file_deletion", "file_number": 108}
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.614610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.697209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.697217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.697221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.697225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:30:23.697230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.885 227364 DEBUG nova.compute.manager [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.886 227364 DEBUG oslo_concurrency.lockutils [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.886 227364 DEBUG oslo_concurrency.lockutils [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.887 227364 DEBUG oslo_concurrency.lockutils [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.887 227364 DEBUG nova.compute.manager [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:23 np0005539551 nova_compute[227360]: 2025-11-29 08:30:23.888 227364 WARNING nova.compute.manager [req-cbe9513f-06ea-41b8-a8eb-bbf05f187534 req-92ad5045-68e5-4644-9332-83e945fa6b26 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:30:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:24.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:24 np0005539551 nova_compute[227360]: 2025-11-29 08:30:24.796 227364 DEBUG nova.network.neutron [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating instance_info_cache with network_info: [{"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:24 np0005539551 nova_compute[227360]: 2025-11-29 08:30:24.810 227364 DEBUG oslo_concurrency.lockutils [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:24 np0005539551 nova_compute[227360]: 2025-11-29 08:30:24.811 227364 DEBUG nova.objects.instance [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'flavor' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:24 np0005539551 kernel: tapbdb96056-9e (unregistering): left promiscuous mode
Nov 29 03:30:24 np0005539551 NetworkManager[48922]: <info>  [1764405024.8924] device (tapbdb96056-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:24 np0005539551 nova_compute[227360]: 2025-11-29 08:30:24.897 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:24Z|00658|binding|INFO|Releasing lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 from this chassis (sb_readonly=0)
Nov 29 03:30:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:24Z|00659|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 down in Southbound
Nov 29 03:30:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:24Z|00660|binding|INFO|Removing iface tapbdb96056-9e ovn-installed in OVS
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.903 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.904 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.906 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:30:24 np0005539551 nova_compute[227360]: 2025-11-29 08:30:24.920 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.920 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[85044780-9eb5-4a5c-bed4-79c7f8a4d204]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.955 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[dfafe060-212a-4ebf-b2cc-90845f90687f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:24 np0005539551 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 29 03:30:24 np0005539551 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000097.scope: Consumed 3.582s CPU time.
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.959 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6b80b8ef-91dc-43df-a817-5ca4241ab575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:24 np0005539551 systemd-machined[190756]: Machine qemu-70-instance-00000097 terminated.
Nov 29 03:30:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:24.986 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4cd795-d045-42d9-9f23-ad37b0238dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.003 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f44ccbb4-e032-4919-b6f9-f1065b55a478]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283548, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.018 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ad594e1a-9e35-4278-b9e5-361ffe5c72ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283549, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283549, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.019 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.024 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.025 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.025 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.025 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.025 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.065 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.078 227364 INFO nova.virt.libvirt.driver [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance destroyed successfully.#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.078 227364 DEBUG nova.objects.instance [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:25 np0005539551 kernel: tapbdb96056-9e: entered promiscuous mode
Nov 29 03:30:25 np0005539551 systemd-udevd[283541]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:25Z|00661|binding|INFO|Claiming lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 for this chassis.
Nov 29 03:30:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:25Z|00662|binding|INFO|bdb96056-9ec2-474c-abe0-50ee1cbb4097: Claiming fa:16:3e:a2:3e:57 10.100.0.10
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 NetworkManager[48922]: <info>  [1764405025.1714] manager: (tapbdb96056-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 03:30:25 np0005539551 NetworkManager[48922]: <info>  [1764405025.1784] device (tapbdb96056-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:25 np0005539551 NetworkManager[48922]: <info>  [1764405025.1796] device (tapbdb96056-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:25Z|00663|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 ovn-installed in OVS
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.188 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:25Z|00664|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 up in Southbound
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.198 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.199 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.201 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:30:25 np0005539551 systemd-machined[190756]: New machine qemu-71-instance-00000097.
Nov 29 03:30:25 np0005539551 systemd[1]: Started Virtual Machine qemu-71-instance-00000097.
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.220 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b44bcf-6987-4844-912d-65b0751227f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.255 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6ae6df-94f8-488a-8a0a-ac45e9dd96e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.258 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4441c7-db85-4e2c-910d-433f7a441fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.293 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3d106e83-b6ce-4f17-9a00-2d39e9d29b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.312 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[32655964-3658-4be4-ba5e-4e8e6cccc9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283586, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.329 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2101b058-7b47-4da0-8e76-10a290ac130d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283587, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283587, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.331 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.332 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.333 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.333 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.334 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.334 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:25.335 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.708 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.709 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405025.70838, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.709 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.757 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.761 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.783 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.783 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405025.7104645, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.783 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.810 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.816 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:25 np0005539551 nova_compute[227360]: 2025-11-29 08:30:25.850 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:30:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:25.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.029 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.029 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.029 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 WARNING nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.030 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.031 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.031 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.031 227364 WARNING nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.031 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.031 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.032 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.032 227364 DEBUG oslo_concurrency.lockutils [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.032 227364 DEBUG nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.032 227364 WARNING nova.compute.manager [req-34dca42f-7405-48b5-8202-53e44b14640a req-9000094c-110f-4b56-b9f3-7f3e885fe926 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:30:26 np0005539551 nova_compute[227360]: 2025-11-29 08:30:26.375 227364 DEBUG nova.compute.manager [None req-80ca3244-0e6a-4604-bc26-e09b669297ed 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:26.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.181 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.512 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.512 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.530 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.607 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.608 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.614 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.614 227364 INFO nova.compute.claims [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:30:27 np0005539551 nova_compute[227360]: 2025-11-29 08:30:27.790 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:27.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3185691362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.251 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.257 227364 DEBUG nova.compute.provider_tree [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.281 227364 DEBUG nova.compute.manager [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.281 227364 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.282 227364 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.282 227364 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.282 227364 DEBUG nova.compute.manager [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.282 227364 WARNING nova.compute.manager [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state active and task_state None.
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.288 227364 DEBUG nova.scheduler.client.report [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.318 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.318 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.376 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.376 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.401 227364 INFO nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.421 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.513 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.573 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.574 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.575 227364 INFO nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Creating image(s)
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.600 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.639 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.667 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.671 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:28.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.737 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.738 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.739 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.739 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.765 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:28 np0005539551 nova_compute[227360]: 2025-11-29 08:30:28.769 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7f0919df-6a69-4824-b20e-6540d1d3de30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.058 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7f0919df-6a69-4824-b20e-6540d1d3de30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.142 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.142 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.142 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.142 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.143 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.144 227364 INFO nova.compute.manager [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Terminating instance
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.145 227364 DEBUG nova.compute.manager [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.150 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] resizing rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:30:29 np0005539551 kernel: tapbdb96056-9e (unregistering): left promiscuous mode
Nov 29 03:30:29 np0005539551 NetworkManager[48922]: <info>  [1764405029.1898] device (tapbdb96056-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:29Z|00665|binding|INFO|Releasing lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 from this chassis (sb_readonly=0)
Nov 29 03:30:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:29Z|00666|binding|INFO|Setting lport bdb96056-9ec2-474c-abe0-50ee1cbb4097 down in Southbound
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:29Z|00667|binding|INFO|Removing iface tapbdb96056-9e ovn-installed in OVS
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.210 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:3e:57 10.100.0.10'], port_security=['fa:16:3e:a2:3e:57 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bdb96056-9ec2-474c-abe0-50ee1cbb4097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.211 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bdb96056-9ec2-474c-abe0-50ee1cbb4097 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.212 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.226 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8b21e-8aa9-49bf-9216-30b35a25762b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.244 227364 DEBUG nova.policy [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b0fe4d78df74554a3a5875ab629d59c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1981e9617628491f938ef0ef01c061c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:30:29 np0005539551 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000097.scope: Consumed 4.002s CPU time.
Nov 29 03:30:29 np0005539551 systemd-machined[190756]: Machine qemu-71-instance-00000097 terminated.
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.254 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9130d9bd-dfd9-4866-a099-67d16b9f0fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.257 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[96518f6a-caf8-4c4a-8947-3c6e0dea100f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.285 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8e13942b-9ea8-4002-baa2-8816507c520a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.299 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[72905ccf-563c-47f8-a14b-261509baf668]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789204, 'reachable_time': 30617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283831, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.312 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[338ed1d5-733a-4b6d-aa14-064a5f38c5a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789215, 'tstamp': 789215}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283832, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9d41f0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 789218, 'tstamp': 789218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283832, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.313 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.319 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.320 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.320 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:29.320 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.337 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.344 227364 DEBUG nova.objects.instance [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.360 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.361 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Ensure instance console log exists: /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.361 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.362 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.363 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.378 227364 INFO nova.virt.libvirt.driver [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Instance destroyed successfully.
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.380 227364 DEBUG nova.objects.instance [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.401 227364 DEBUG nova.virt.libvirt.vif [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-808979613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-808979613',id=151,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-ooph05ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:26Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.402 227364 DEBUG nova.network.os_vif_util [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "address": "fa:16:3e:a2:3e:57", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdb96056-9e", "ovs_interfaceid": "bdb96056-9ec2-474c-abe0-50ee1cbb4097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.403 227364 DEBUG nova.network.os_vif_util [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.404 227364 DEBUG os_vif [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.407 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdb96056-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.409 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.413 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.415 227364 INFO os_vif [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:3e:57,bridge_name='br-int',has_traffic_filtering=True,id=bdb96056-9ec2-474c-abe0-50ee1cbb4097,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdb96056-9e')#033[00m
Nov 29 03:30:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.817 227364 INFO nova.virt.libvirt.driver [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Deleting instance files /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_del#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.818 227364 INFO nova.virt.libvirt.driver [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Deletion of /var/lib/nova/instances/a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9_del complete#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.864 227364 INFO nova.compute.manager [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.865 227364 DEBUG oslo.service.loopingcall [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.865 227364 DEBUG nova.compute.manager [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:29 np0005539551 nova_compute[227360]: 2025-11-29 08:30:29.865 227364 DEBUG nova.network.neutron [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.404 227364 DEBUG nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.404 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.405 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.405 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.405 227364 DEBUG nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.405 227364 DEBUG nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-unplugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.406 227364 DEBUG nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.406 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.406 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.407 227364 DEBUG oslo_concurrency.lockutils [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.407 227364 DEBUG nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] No waiting events found dispatching network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.407 227364 WARNING nova.compute.manager [req-d98a2361-ea99-4caf-b2ac-a73ef2ede437 req-7bac2195-1e82-4422-8485-efacb8411737 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received unexpected event network-vif-plugged-bdb96056-9ec2-474c-abe0-50ee1cbb4097 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:30.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:30 np0005539551 nova_compute[227360]: 2025-11-29 08:30:30.957 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Successfully created port: b8a243db-cb27-44c3-9015-4bf4d9a49bac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.119 227364 DEBUG nova.network.neutron [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.138 227364 INFO nova.compute.manager [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Took 1.27 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.349 227364 INFO nova.compute.manager [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.411 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.412 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.477 227364 DEBUG oslo_concurrency.processutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1601381614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.946 227364 DEBUG oslo_concurrency.processutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.953 227364 DEBUG nova.compute.provider_tree [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.967 227364 DEBUG nova.scheduler.client.report [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:31 np0005539551 nova_compute[227360]: 2025-11-29 08:30:31.984 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.009 227364 INFO nova.scheduler.client.report [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Deleted allocations for instance a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.091 227364 DEBUG oslo_concurrency.lockutils [None req-3d0c4b47-4296-436a-b906-faac9444c703 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.138 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Successfully updated port: b8a243db-cb27-44c3-9015-4bf4d9a49bac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.156 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.157 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.157 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.182 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.268 227364 DEBUG nova.compute.manager [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Received event network-vif-deleted-bdb96056-9ec2-474c-abe0-50ee1cbb4097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.340 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.510 227364 DEBUG nova.compute.manager [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-changed-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.510 227364 DEBUG nova.compute.manager [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Refreshing instance network info cache due to event network-changed-b8a243db-cb27-44c3-9015-4bf4d9a49bac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:32 np0005539551 nova_compute[227360]: 2025-11-29 08:30:32.511 227364 DEBUG oslo_concurrency.lockutils [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:32.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.152 227364 DEBUG nova.network.neutron [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.178 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.179 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance network_info: |[{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.180 227364 DEBUG oslo_concurrency.lockutils [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.181 227364 DEBUG nova.network.neutron [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Refreshing network info cache for port b8a243db-cb27-44c3-9015-4bf4d9a49bac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.188 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start _get_guest_xml network_info=[{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.193 227364 WARNING nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.198 227364 DEBUG nova.virt.libvirt.host [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.200 227364 DEBUG nova.virt.libvirt.host [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.203 227364 DEBUG nova.virt.libvirt.host [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.204 227364 DEBUG nova.virt.libvirt.host [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.205 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.206 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.207 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.207 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.207 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.208 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.208 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.209 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.209 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.209 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.210 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.210 227364 DEBUG nova.virt.hardware [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.214 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 03:30:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352772972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.633 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.660 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:33 np0005539551 nova_compute[227360]: 2025-11-29 08:30:33.663 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:33.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1305545371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.103 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.105 227364 DEBUG nova.virt.libvirt.vif [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.106 227364 DEBUG nova.network.os_vif_util [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.107 227364 DEBUG nova.network.os_vif_util [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.108 227364 DEBUG nova.objects.instance [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.127 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <uuid>7f0919df-6a69-4824-b20e-6540d1d3de30</uuid>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <name>instance-0000009c</name>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachVolumeTestJSON-server-570591477</nova:name>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:30:33</nova:creationTime>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:user uuid="5b0fe4d78df74554a3a5875ab629d59c">tempest-AttachVolumeTestJSON-169198681-project-member</nova:user>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:project uuid="1981e9617628491f938ef0ef01c061c5">tempest-AttachVolumeTestJSON-169198681</nova:project>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <nova:port uuid="b8a243db-cb27-44c3-9015-4bf4d9a49bac">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="serial">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="uuid">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ad:80:cb"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <target dev="tapb8a243db-cb"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/console.log" append="off"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:30:34 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:30:34 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:30:34 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:30:34 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.128 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Preparing to wait for external event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.129 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.129 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.129 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.130 227364 DEBUG nova.virt.libvirt.vif [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.130 227364 DEBUG nova.network.os_vif_util [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.131 227364 DEBUG nova.network.os_vif_util [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.131 227364 DEBUG os_vif [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.132 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.132 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.133 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.135 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.135 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8a243db-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.136 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8a243db-cb, col_values=(('external_ids', {'iface-id': 'b8a243db-cb27-44c3-9015-4bf4d9a49bac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:80:cb', 'vm-uuid': '7f0919df-6a69-4824-b20e-6540d1d3de30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.137 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 NetworkManager[48922]: <info>  [1764405034.1387] manager: (tapb8a243db-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.140 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.143 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.144 227364 INFO os_vif [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.207 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.207 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.207 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No VIF found with MAC fa:16:3e:ad:80:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.208 227364 INFO nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Using config drive#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.231 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.492 227364 DEBUG nova.network.neutron [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updated VIF entry in instance network info cache for port b8a243db-cb27-44c3-9015-4bf4d9a49bac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.493 227364 DEBUG nova.network.neutron [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.508 227364 DEBUG oslo_concurrency.lockutils [req-7254dcfb-1474-49a7-a223-b3f391aa01b7 req-f7506e3c-41ad-4ede-ac0d-011d31d83d8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.591 227364 INFO nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Creating config drive at /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.597 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqhtdctg7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:34.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.730 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqhtdctg7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.759 227364 DEBUG nova.storage.rbd_utils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.762 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config 7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.914 227364 DEBUG oslo_concurrency.processutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config 7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.915 227364 INFO nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Deleting local config drive /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/disk.config because it was imported into RBD.#033[00m
Nov 29 03:30:34 np0005539551 kernel: tapb8a243db-cb: entered promiscuous mode
Nov 29 03:30:34 np0005539551 NetworkManager[48922]: <info>  [1764405034.9590] manager: (tapb8a243db-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 03:30:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:34Z|00668|binding|INFO|Claiming lport b8a243db-cb27-44c3-9015-4bf4d9a49bac for this chassis.
Nov 29 03:30:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:34Z|00669|binding|INFO|b8a243db-cb27-44c3-9015-4bf4d9a49bac: Claiming fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.960 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 nova_compute[227360]: 2025-11-29 08:30:34.963 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.969 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.970 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 bound to our chassis#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.972 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6d35cfb-cc41-4788-977c-b8e5140795a0#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.983 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[40b01fb0-18d2-40b8-aeb8-44b1f33ddb84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.985 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6d35cfb-c1 in ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:34 np0005539551 systemd-udevd[284039]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.990 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6d35cfb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.990 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[adf60815-ad7e-45a5-ba42-bab065801b4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:34.991 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8f728dcf-ad4a-4a90-a8d2-204cdb664017]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539551 systemd-machined[190756]: New machine qemu-72-instance-0000009c.
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.004 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e5afc1e4-6719-46ee-9f2b-c504e7021e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 NetworkManager[48922]: <info>  [1764405035.0066] device (tapb8a243db-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:35 np0005539551 NetworkManager[48922]: <info>  [1764405035.0079] device (tapb8a243db-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:35 np0005539551 systemd[1]: Started Virtual Machine qemu-72-instance-0000009c.
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.029 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:35Z|00670|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac ovn-installed in OVS
Nov 29 03:30:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:35Z|00671|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac up in Southbound
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.034 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.034 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[87487580-d9f8-4d3c-97f5-da4d934627ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.066 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf0103a-4e93-427a-a66d-9b85f88d60db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.070 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ab260ab9-2c7c-46c3-aa75-a32a2ca17461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 NetworkManager[48922]: <info>  [1764405035.0720] manager: (tapd6d35cfb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.100 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[91d8289b-e7c2-4a76-8664-9b63951156dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.103 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd27e50-3b14-4ec7-b47e-866a1fbaa3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 NetworkManager[48922]: <info>  [1764405035.1235] device (tapd6d35cfb-c0): carrier: link connected
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.128 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a72d5fa5-32f9-4a0d-9717-57e69e0fdcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.144 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d92c7b63-f085-4080-80d6-7195f201f99b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808200, 'reachable_time': 38438, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284072, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.158 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8064a5-5987-426b-8a2a-dda304c9ec06]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:5b88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 808200, 'tstamp': 808200}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284073, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.173 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[104ce90e-89f7-435b-82e4-dd296d84ea1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808200, 'reachable_time': 38438, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284074, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.202 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbbe81c-f6ed-44b6-bbd9-655140c015c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.231 227364 DEBUG nova.compute.manager [req-4dc5747d-5928-43ab-9f5b-c4e4c4591a79 req-3ebf0df5-3865-4eb8-9ea3-4fdccf4b9674 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.231 227364 DEBUG oslo_concurrency.lockutils [req-4dc5747d-5928-43ab-9f5b-c4e4c4591a79 req-3ebf0df5-3865-4eb8-9ea3-4fdccf4b9674 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.232 227364 DEBUG oslo_concurrency.lockutils [req-4dc5747d-5928-43ab-9f5b-c4e4c4591a79 req-3ebf0df5-3865-4eb8-9ea3-4fdccf4b9674 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.232 227364 DEBUG oslo_concurrency.lockutils [req-4dc5747d-5928-43ab-9f5b-c4e4c4591a79 req-3ebf0df5-3865-4eb8-9ea3-4fdccf4b9674 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.232 227364 DEBUG nova.compute.manager [req-4dc5747d-5928-43ab-9f5b-c4e4c4591a79 req-3ebf0df5-3865-4eb8-9ea3-4fdccf4b9674 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Processing event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.255 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6757f40-224d-47f0-bb89-d8172d78f207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.257 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.257 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.257 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d35cfb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.259 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 kernel: tapd6d35cfb-c0: entered promiscuous mode
Nov 29 03:30:35 np0005539551 NetworkManager[48922]: <info>  [1764405035.2614] manager: (tapd6d35cfb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.265 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6d35cfb-c0, col_values=(('external_ids', {'iface-id': '070d06ed-b610-481b-b747-9c7d0eb2bcf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:35Z|00672|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.268 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.269 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8b74ebcd-e83b-4ef9-a921-326190106caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.272 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:35.273 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'env', 'PROCESS_TAG=haproxy-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6d35cfb-cc41-4788-977c-b8e5140795a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.419 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:35 np0005539551 podman[284106]: 2025-11-29 08:30:35.633735196 +0000 UTC m=+0.053007484 container create 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:30:35 np0005539551 systemd[1]: Started libpod-conmon-08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb.scope.
Nov 29 03:30:35 np0005539551 podman[284106]: 2025-11-29 08:30:35.608881134 +0000 UTC m=+0.028153442 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:35 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:30:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46541782af9dbb26f3f57e8d8aac6773d4581e00134e9ad776599d175fd6587a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:35 np0005539551 podman[284106]: 2025-11-29 08:30:35.72922575 +0000 UTC m=+0.148498038 container init 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:30:35 np0005539551 podman[284106]: 2025-11-29 08:30:35.734494252 +0000 UTC m=+0.153766540 container start 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:30:35 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [NOTICE]   (284126) : New worker (284128) forked
Nov 29 03:30:35 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [NOTICE]   (284126) : Loading success.
Nov 29 03:30:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:35.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.989 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405035.9889011, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.989 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.992 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:35 np0005539551 nova_compute[227360]: 2025-11-29 08:30:35.996 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.000 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance spawned successfully.#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.000 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.014 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.021 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.024 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.024 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.025 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.025 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.026 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.026 227364 DEBUG nova.virt.libvirt.driver [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.067 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.067 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405035.9890487, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.067 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.085 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.088 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405035.9955032, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.089 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.092 227364 INFO nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Took 7.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.093 227364 DEBUG nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.119 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.122 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.147 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.158 227364 INFO nova.compute.manager [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Took 8.58 seconds to build instance.#033[00m
Nov 29 03:30:36 np0005539551 nova_compute[227360]: 2025-11-29 08:30:36.171 227364 DEBUG oslo_concurrency.lockutils [None req-7178e99f-ea30-488f-bec7-588a9b6ccd62 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.184 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.346 227364 DEBUG nova.compute.manager [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.346 227364 DEBUG oslo_concurrency.lockutils [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.347 227364 DEBUG oslo_concurrency.lockutils [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.347 227364 DEBUG oslo_concurrency.lockutils [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.347 227364 DEBUG nova.compute.manager [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:37 np0005539551 nova_compute[227360]: 2025-11-29 08:30:37.347 227364 WARNING nova.compute.manager [req-33f7897d-8546-4d1f-8776-a7dbbc987459 req-3dabaa98-8a97-4056-98bf-29e837ab4684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:37.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:38 np0005539551 nova_compute[227360]: 2025-11-29 08:30:38.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:38 np0005539551 NetworkManager[48922]: <info>  [1764405038.9315] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 03:30:38 np0005539551 NetworkManager[48922]: <info>  [1764405038.9324] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 03:30:38 np0005539551 nova_compute[227360]: 2025-11-29 08:30:38.930 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539551 nova_compute[227360]: 2025-11-29 08:30:39.057 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:39Z|00673|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:30:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:39Z|00674|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:30:39 np0005539551 nova_compute[227360]: 2025-11-29 08:30:39.073 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539551 nova_compute[227360]: 2025-11-29 08:30:39.137 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.504 227364 DEBUG nova.compute.manager [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-changed-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.504 227364 DEBUG nova.compute.manager [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Refreshing instance network info cache due to event network-changed-b8a243db-cb27-44c3-9015-4bf4d9a49bac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.504 227364 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.505 227364 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:40 np0005539551 nova_compute[227360]: 2025-11-29 08:30:40.505 227364 DEBUG nova.network.neutron [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Refreshing network info cache for port b8a243db-cb27-44c3-9015-4bf4d9a49bac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263912062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.054 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.126 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.126 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.130 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.130 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:41 np0005539551 podman[284205]: 2025-11-29 08:30:41.178545327 +0000 UTC m=+0.081513506 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:30:41 np0005539551 podman[284204]: 2025-11-29 08:30:41.182968926 +0000 UTC m=+0.087419296 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 03:30:41 np0005539551 podman[284203]: 2025-11-29 08:30:41.18313632 +0000 UTC m=+0.088688889 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.303 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.304 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3953MB free_disk=20.808021545410156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.305 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.305 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.379 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 480ae817-3676-4499-a047-6b8b383e7bf2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.380 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 7f0919df-6a69-4824-b20e-6540d1d3de30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.380 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.380 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.499 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 03:30:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:41.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/881369551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.922 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.928 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.950 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.971 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:30:41 np0005539551 nova_compute[227360]: 2025-11-29 08:30:41.972 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.209 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.372 227364 DEBUG nova.network.neutron [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updated VIF entry in instance network info cache for port b8a243db-cb27-44c3-9015-4bf4d9a49bac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.372 227364 DEBUG nova.network.neutron [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.390 227364 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.642 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.642 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.643 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.643 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.644 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.646 227364 INFO nova.compute.manager [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Terminating instance#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.648 227364 DEBUG nova.compute.manager [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:42 np0005539551 kernel: tap9eea9321-fa (unregistering): left promiscuous mode
Nov 29 03:30:42 np0005539551 NetworkManager[48922]: <info>  [1764405042.7046] device (tap9eea9321-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00675|binding|INFO|Releasing lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 from this chassis (sb_readonly=0)
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00676|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 down in Southbound
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00677|binding|INFO|Removing iface tap9eea9321-fa ovn-installed in OVS
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.726 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.732 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:58:94 10.100.0.13'], port_security=['fa:16:3e:a2:58:94 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '480ae817-3676-4499-a047-6b8b383e7bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9eea9321-fa5c-4ec5-81e3-b6fba93e7545) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.735 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.738 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.740 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[71680f70-6a25-42bb-bd6d-d8e8b8c41f2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.741 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace which is not needed anymore#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.755 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 29 03:30:42 np0005539551 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008f.scope: Consumed 21.403s CPU time.
Nov 29 03:30:42 np0005539551 systemd-machined[190756]: Machine qemu-66-instance-0000008f terminated.
Nov 29 03:30:42 np0005539551 kernel: tap9eea9321-fa: entered promiscuous mode
Nov 29 03:30:42 np0005539551 systemd-udevd[284293]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.868 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 NetworkManager[48922]: <info>  [1764405042.8718] manager: (tap9eea9321-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00678|binding|INFO|Claiming lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for this chassis.
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00679|binding|INFO|9eea9321-fa5c-4ec5-81e3-b6fba93e7545: Claiming fa:16:3e:a2:58:94 10.100.0.13
Nov 29 03:30:42 np0005539551 kernel: tap9eea9321-fa (unregistering): left promiscuous mode
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.878 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:58:94 10.100.0.13'], port_security=['fa:16:3e:a2:58:94 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '480ae817-3676-4499-a047-6b8b383e7bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9eea9321-fa5c-4ec5-81e3-b6fba93e7545) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.890 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00680|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 ovn-installed in OVS
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00681|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 up in Southbound
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00682|binding|INFO|Releasing lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 from this chassis (sb_readonly=1)
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00683|binding|INFO|Removing iface tap9eea9321-fa ovn-installed in OVS
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00684|if_status|INFO|Dropped 2 log messages in last 88 seconds (most recently, 88 seconds ago) due to excessive rate
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00685|if_status|INFO|Not setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 down as sb is readonly
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00686|binding|INFO|Releasing lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 from this chassis (sb_readonly=0)
Nov 29 03:30:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:42Z|00687|binding|INFO|Setting lport 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 down in Southbound
Nov 29 03:30:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:42.902 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:58:94 10.100.0.13'], port_security=['fa:16:3e:a2:58:94 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '480ae817-3676-4499-a047-6b8b383e7bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=9eea9321-fa5c-4ec5-81e3-b6fba93e7545) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.905 227364 INFO nova.virt.libvirt.driver [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Instance destroyed successfully.#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.906 227364 DEBUG nova.objects.instance [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid 480ae817-3676-4499-a047-6b8b383e7bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.911 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.918 227364 DEBUG nova.virt.libvirt.vif [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-2116728451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-2116728451',id=143,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-sgna7du5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:34Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=480ae817-3676-4499-a047-6b8b383e7bf2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.919 227364 DEBUG nova.network.os_vif_util [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "address": "fa:16:3e:a2:58:94", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9eea9321-fa", "ovs_interfaceid": "9eea9321-fa5c-4ec5-81e3-b6fba93e7545", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.920 227364 DEBUG nova.network.os_vif_util [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.921 227364 DEBUG os_vif [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.922 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.923 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9eea9321-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.924 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.927 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:42 np0005539551 nova_compute[227360]: 2025-11-29 08:30:42.929 227364 INFO os_vif [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:58:94,bridge_name='br-int',has_traffic_filtering=True,id=9eea9321-fa5c-4ec5-81e3-b6fba93e7545,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9eea9321-fa')#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.072 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.072 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.073 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:30:43 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [NOTICE]   (280189) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:43 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [NOTICE]   (280189) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:43 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [WARNING]  (280189) : Exiting Master process...
Nov 29 03:30:43 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [ALERT]    (280189) : Current worker (280191) exited with code 143 (Terminated)
Nov 29 03:30:43 np0005539551 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[280185]: [WARNING]  (280189) : All workers exited. Exiting... (0)
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.112 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:30:43 np0005539551 systemd[1]: libpod-6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271.scope: Deactivated successfully.
Nov 29 03:30:43 np0005539551 podman[284310]: 2025-11-29 08:30:43.120713927 +0000 UTC m=+0.283267174 container died 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.873 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.875 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.875 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.875 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.989 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.990 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.990 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.990 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.991 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.991 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.991 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.992 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.992 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.992 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.993 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.994 227364 WARNING nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received unexpected event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.994 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.994 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.995 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.995 227364 DEBUG oslo_concurrency.lockutils [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.995 227364 DEBUG nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:43 np0005539551 nova_compute[227360]: 2025-11-29 08:30:43.996 227364 WARNING nova.compute.manager [req-fa2af278-4bb5-4820-b9ee-fded823fdafe req-cd4abbd2-cf3d-4098-90ac-34882d25c84f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received unexpected event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-4892b8beb5e7852e93dac44a96d93d23d770aeb101354988b52bea8376b88508-merged.mount: Deactivated successfully.
Nov 29 03:30:44 np0005539551 podman[284310]: 2025-11-29 08:30:44.284128699 +0000 UTC m=+1.446681946 container cleanup 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:30:44 np0005539551 systemd[1]: libpod-conmon-6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271.scope: Deactivated successfully.
Nov 29 03:30:44 np0005539551 podman[284361]: 2025-11-29 08:30:44.366896319 +0000 UTC m=+0.059428738 container remove 6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.372 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9193c280-ba3a-4db2-af3e-b04e3a0efec1]: (4, ('Sat Nov 29 08:30:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271)\n6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271\nSat Nov 29 08:30:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271)\n6bd859c0997474808cbf014c68b6ef4e905e9e2b7b075c013615f548f18bd271\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.374 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7912d6-cfc2-4484-b230-ba87ff86a714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.375 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.377 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405029.375412, a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.377 227364 INFO nova.compute.manager [-] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.422 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539551 kernel: tapd9d41f0a-10: left promiscuous mode
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.424 227364 DEBUG nova.compute.manager [None req-4927a224-9574-41f7-aaa5-183f28edd6e6 - - - - - -] [instance: a0b7719a-0c0b-4f4a-ae30-77f06c13cbb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.428 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ec8633-1622-42be-a4f2-5e4944988ee9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.446 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5867331e-230e-445d-a2fa-d4a9a58521e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.447 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c331566f-d9c2-4da0-bfbe-2d13c860a8b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.461 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[44177a4c-8544-4d61-a523-078f04ad9b46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 789195, 'reachable_time': 16463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284379, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 systemd[1]: run-netns-ovnmeta\x2dd9d41f0a\x2d17f9\x2d4df4\x2da453\x2d04da996d63b6.mount: Deactivated successfully.
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.465 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.465 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c995fd-df73-4401-aeab-23b679ce444c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.466 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.468 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.469 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5e11ed-098d-47c9-ad9f-cd27cc7ef412]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.470 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 9eea9321-fa5c-4ec5-81e3-b6fba93e7545 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.473 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:30:44.473 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[174b3562-2e20-459c-87ab-5efb8bf227ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 03:30:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:44.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.933 227364 INFO nova.virt.libvirt.driver [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Deleting instance files /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2_del#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.934 227364 INFO nova.virt.libvirt.driver [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Deletion of /var/lib/nova/instances/480ae817-3676-4499-a047-6b8b383e7bf2_del complete#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.980 227364 INFO nova.compute.manager [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Took 2.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.981 227364 DEBUG oslo.service.loopingcall [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.981 227364 DEBUG nova.compute.manager [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:44 np0005539551 nova_compute[227360]: 2025-11-29 08:30:44.981 227364 DEBUG nova.network.neutron [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.304 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.320 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.321 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.322 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.323 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:45 np0005539551 nova_compute[227360]: 2025-11-29 08:30:45.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.097 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.097 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.097 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.098 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.098 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.098 227364 WARNING nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received unexpected event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.098 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.099 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.099 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.099 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.099 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.099 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-unplugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.100 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.100 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.100 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.100 227364 DEBUG oslo_concurrency.lockutils [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.100 227364 DEBUG nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] No waiting events found dispatching network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.101 227364 WARNING nova.compute.manager [req-d8b7bc2c-c143-4fda-85a2-8a40d80eba85 req-f07e8020-8d86-457e-b288-3cd25174c983 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received unexpected event network-vif-plugged-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.196 227364 DEBUG nova.network.neutron [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.214 227364 INFO nova.compute.manager [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Took 1.23 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.257 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.258 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.315 227364 DEBUG oslo_concurrency.processutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1730477952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.872 227364 DEBUG oslo_concurrency.processutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.879 227364 DEBUG nova.compute.provider_tree [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.902 227364 DEBUG nova.scheduler.client.report [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.923 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539551 nova_compute[227360]: 2025-11-29 08:30:46.947 227364 INFO nova.scheduler.client.report [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Deleted allocations for instance 480ae817-3676-4499-a047-6b8b383e7bf2#033[00m
Nov 29 03:30:47 np0005539551 nova_compute[227360]: 2025-11-29 08:30:47.010 227364 DEBUG oslo_concurrency.lockutils [None req-d2949eca-9f18-4f4b-ac8d-44f5f8108a26 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "480ae817-3676-4499-a047-6b8b383e7bf2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:47 np0005539551 nova_compute[227360]: 2025-11-29 08:30:47.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:47 np0005539551 nova_compute[227360]: 2025-11-29 08:30:47.926 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539551 nova_compute[227360]: 2025-11-29 08:30:48.202 227364 DEBUG nova.compute.manager [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Received event network-vif-deleted-9eea9321-fa5c-4ec5-81e3-b6fba93e7545 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 03:30:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:50.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:51 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:51Z|00688|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:30:52 np0005539551 nova_compute[227360]: 2025-11-29 08:30:52.035 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539551 nova_compute[227360]: 2025-11-29 08:30:52.211 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:52.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:52 np0005539551 nova_compute[227360]: 2025-11-29 08:30:52.928 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539551 nova_compute[227360]: 2025-11-29 08:30:53.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:53 np0005539551 nova_compute[227360]: 2025-11-29 08:30:53.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:30:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:54Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:30:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:30:54Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:30:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:54.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:55.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:56 np0005539551 nova_compute[227360]: 2025-11-29 08:30:56.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:56.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:57 np0005539551 nova_compute[227360]: 2025-11-29 08:30:57.212 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:57 np0005539551 nova_compute[227360]: 2025-11-29 08:30:57.903 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405042.9019465, 480ae817-3676-4499-a047-6b8b383e7bf2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:57 np0005539551 nova_compute[227360]: 2025-11-29 08:30:57.903 227364 INFO nova.compute.manager [-] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:57 np0005539551 nova_compute[227360]: 2025-11-29 08:30:57.920 227364 DEBUG nova.compute.manager [None req-7fb5ce76-ae99-455f-8ee1-ef2a025311b1 - - - - - -] [instance: 480ae817-3676-4499-a047-6b8b383e7bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:57 np0005539551 nova_compute[227360]: 2025-11-29 08:30:57.929 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:57.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:58.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:30:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:00.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:02 np0005539551 nova_compute[227360]: 2025-11-29 08:31:02.215 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:02.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:02 np0005539551 nova_compute[227360]: 2025-11-29 08:31:02.930 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:04.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:07.212 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:07 np0005539551 nova_compute[227360]: 2025-11-29 08:31:07.212 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:07.213 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:31:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:07.213 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:07 np0005539551 nova_compute[227360]: 2025-11-29 08:31:07.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:07 np0005539551 nova_compute[227360]: 2025-11-29 08:31:07.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:08.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:10.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:11 np0005539551 nova_compute[227360]: 2025-11-29 08:31:11.222 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539551 podman[284535]: 2025-11-29 08:31:11.609165314 +0000 UTC m=+0.048012140 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:31:11 np0005539551 podman[284534]: 2025-11-29 08:31:11.611745014 +0000 UTC m=+0.053245252 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:31:11 np0005539551 podman[284533]: 2025-11-29 08:31:11.665083877 +0000 UTC m=+0.107348935 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:31:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:11.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:12 np0005539551 nova_compute[227360]: 2025-11-29 08:31:12.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:12.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:12 np0005539551 nova_compute[227360]: 2025-11-29 08:31:12.936 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.582 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.583 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.602 227364 DEBUG nova.objects.instance [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.647 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.871 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.871 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:13 np0005539551 nova_compute[227360]: 2025-11-29 08:31:13.872 227364 INFO nova.compute.manager [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Attaching volume 819403e4-4b9d-48a5-8cd5-1b913fa9f1ae to /dev/vdb#033[00m
Nov 29 03:31:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.031 227364 DEBUG os_brick.utils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.032 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.049 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.050 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[b2568a27-b35d-46ae-95f9-f2c2776092af]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.051 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.065 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.065 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c16fa90c-dd59-42a7-a888-7c4571e1bbef]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.067 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.076 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.076 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[acd1b5c4-8081-43c6-99a4-1f016d2ec191]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.078 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb8492f-dcae-4f5f-ab10-43f162a8a05d]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.078 227364 DEBUG oslo_concurrency.processutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.107 227364 DEBUG oslo_concurrency.processutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.110 227364 DEBUG os_brick.initiator.connectors.lightos [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.110 227364 DEBUG os_brick.initiator.connectors.lightos [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.111 227364 DEBUG os_brick.initiator.connectors.lightos [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.111 227364 DEBUG os_brick.utils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.112 227364 DEBUG nova.virt.block_device [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating existing volume attachment record: dbfb7ff5-c7ea-44e1-b024-388a0296e20e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:31:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:14.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.952 227364 DEBUG nova.objects.instance [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.977 227364 DEBUG nova.virt.libvirt.driver [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Attempting to attach volume 819403e4-4b9d-48a5-8cd5-1b913fa9f1ae with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:31:14 np0005539551 nova_compute[227360]: 2025-11-29 08:31:14.980 227364 DEBUG nova.virt.libvirt.guest [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-819403e4-4b9d-48a5-8cd5-1b913fa9f1ae">
Nov 29 03:31:14 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:31:14 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:31:14 np0005539551 nova_compute[227360]:  <serial>819403e4-4b9d-48a5-8cd5-1b913fa9f1ae</serial>
Nov 29 03:31:14 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:31:14 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.174 227364 DEBUG nova.virt.libvirt.driver [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.174 227364 DEBUG nova.virt.libvirt.driver [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.174 227364 DEBUG nova.virt.libvirt.driver [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.174 227364 DEBUG nova.virt.libvirt.driver [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No VIF found with MAC fa:16:3e:ad:80:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.211 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:15 np0005539551 nova_compute[227360]: 2025-11-29 08:31:15.360 227364 DEBUG oslo_concurrency.lockutils [None req-1933e92c-b4be-40d1-8aeb-e18cae229756 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.506 227364 DEBUG oslo_concurrency.lockutils [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.506 227364 DEBUG oslo_concurrency.lockutils [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.507 227364 DEBUG nova.compute.manager [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.511 227364 DEBUG nova.compute.manager [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.512 227364 DEBUG nova.objects.instance [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:16 np0005539551 nova_compute[227360]: 2025-11-29 08:31:16.538 227364 DEBUG nova.virt.libvirt.driver [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.707497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076707559, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 861, "num_deletes": 254, "total_data_size": 1558994, "memory_usage": 1586720, "flush_reason": "Manual Compaction"}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076714453, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1027784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56014, "largest_seqno": 56870, "table_properties": {"data_size": 1023736, "index_size": 1764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9710, "raw_average_key_size": 20, "raw_value_size": 1015324, "raw_average_value_size": 2110, "num_data_blocks": 77, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405023, "oldest_key_time": 1764405023, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6983 microseconds, and 2916 cpu microseconds.
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.714486) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1027784 bytes OK
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.714501) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.715815) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.715826) EVENT_LOG_v1 {"time_micros": 1764405076715823, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.715840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1554489, prev total WAL file size 1554489, number of live WAL files 2.
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.716370) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1003KB)], [111(11MB)]
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076716406, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12865861, "oldest_snapshot_seqno": -1}
Nov 29 03:31:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:16.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8712 keys, 11015365 bytes, temperature: kUnknown
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076894091, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 11015365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10959592, "index_size": 32877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 228062, "raw_average_key_size": 26, "raw_value_size": 10806602, "raw_average_value_size": 1240, "num_data_blocks": 1272, "num_entries": 8712, "num_filter_entries": 8712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.895474) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11015365 bytes
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.896989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.0 rd, 61.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 9236, records dropped: 524 output_compression: NoCompression
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.897021) EVENT_LOG_v1 {"time_micros": 1764405076897006, "job": 70, "event": "compaction_finished", "compaction_time_micros": 178790, "compaction_time_cpu_micros": 34905, "output_level": 6, "num_output_files": 1, "total_output_size": 11015365, "num_input_records": 9236, "num_output_records": 8712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076897518, "job": 70, "event": "table_file_deletion", "file_number": 113}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076902093, "job": 70, "event": "table_file_deletion", "file_number": 111}
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.716273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.902306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.902315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.902318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.902513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:31:16.902527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:17 np0005539551 nova_compute[227360]: 2025-11-29 08:31:17.222 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:17 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:17 np0005539551 nova_compute[227360]: 2025-11-29 08:31:17.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:19 np0005539551 kernel: tapb8a243db-cb (unregistering): left promiscuous mode
Nov 29 03:31:19 np0005539551 NetworkManager[48922]: <info>  [1764405079.0730] device (tapb8a243db-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:19Z|00689|binding|INFO|Releasing lport b8a243db-cb27-44c3-9015-4bf4d9a49bac from this chassis (sb_readonly=0)
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:19Z|00690|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac down in Southbound
Nov 29 03:31:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:19Z|00691|binding|INFO|Removing iface tapb8a243db-cb ovn-installed in OVS
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.091 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.093 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 unbound from our chassis#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.094 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6d35cfb-cc41-4788-977c-b8e5140795a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.095 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[de6a4a63-f357-496e-8846-34142ad4eab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.096 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace which is not needed anymore#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539551 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Consumed 15.699s CPU time.
Nov 29 03:31:19 np0005539551 systemd-machined[190756]: Machine qemu-72-instance-0000009c terminated.
Nov 29 03:31:19 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [NOTICE]   (284126) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:19 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [NOTICE]   (284126) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:19 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [WARNING]  (284126) : Exiting Master process...
Nov 29 03:31:19 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [ALERT]    (284126) : Current worker (284128) exited with code 143 (Terminated)
Nov 29 03:31:19 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284122]: [WARNING]  (284126) : All workers exited. Exiting... (0)
Nov 29 03:31:19 np0005539551 systemd[1]: libpod-08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539551 podman[284694]: 2025-11-29 08:31:19.281749315 +0000 UTC m=+0.108270999 container died 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.306 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.310 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.341 227364 DEBUG nova.compute.manager [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.342 227364 DEBUG oslo_concurrency.lockutils [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.342 227364 DEBUG oslo_concurrency.lockutils [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.343 227364 DEBUG oslo_concurrency.lockutils [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.343 227364 DEBUG nova.compute.manager [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.343 227364 WARNING nova.compute.manager [req-5e418069-cf7c-415b-b5fe-f96ed26a3aee req-34b5f74d-2d78-4c30-acda-ca5a6b428fd9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:31:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay-46541782af9dbb26f3f57e8d8aac6773d4581e00134e9ad776599d175fd6587a-merged.mount: Deactivated successfully.
Nov 29 03:31:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:19 np0005539551 podman[284694]: 2025-11-29 08:31:19.522524999 +0000 UTC m=+0.349046683 container cleanup 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:31:19 np0005539551 systemd[1]: libpod-conmon-08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.553 227364 INFO nova.virt.libvirt.driver [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.558 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance destroyed successfully.#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.558 227364 DEBUG nova.objects.instance [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.581 227364 DEBUG nova.compute.manager [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:19 np0005539551 podman[284734]: 2025-11-29 08:31:19.594121796 +0000 UTC m=+0.047355742 container remove 08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.599 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cb219cac-1a76-41c6-b481-0e307f77e55a]: (4, ('Sat Nov 29 08:31:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb)\n08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb\nSat Nov 29 08:31:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb)\n08c27187bffe51025a825484f3f2cd815e45063b442fa98fbb46255a5481e9fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.601 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[24f1ee82-7dc3-4a50-ba1a-f009361898ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.602 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:19 np0005539551 kernel: tapd6d35cfb-c0: left promiscuous mode
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.625 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[98063ab3-aef2-44f5-a1c0-dd1144609d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.644 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8580f589-bd09-4031-b442-6c4393937bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.645 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c884841-153f-45b1-b481-0f2be3a360e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 nova_compute[227360]: 2025-11-29 08:31:19.651 227364 DEBUG oslo_concurrency.lockutils [None req-fa640d06-7ee9-4b34-be0f-fd2a69b2a4ca 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.660 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5d970fe3-b14a-4882-b02e-e98294b35a77]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 808194, 'reachable_time': 24130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284753, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 systemd[1]: run-netns-ovnmeta\x2dd6d35cfb\x2dcc41\x2d4788\x2d977c\x2db8e5140795a0.mount: Deactivated successfully.
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.664 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.665 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[319019ad-f9b2-4ac0-a807-2bdf616a8afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.880 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.881 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:19.881 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:19.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:20 np0005539551 nova_compute[227360]: 2025-11-29 08:31:20.345 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:20 np0005539551 nova_compute[227360]: 2025-11-29 08:31:20.361 227364 DEBUG oslo_concurrency.lockutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:20 np0005539551 nova_compute[227360]: 2025-11-29 08:31:20.362 227364 DEBUG oslo_concurrency.lockutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:20 np0005539551 nova_compute[227360]: 2025-11-29 08:31:20.362 227364 DEBUG nova.network.neutron [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:20 np0005539551 nova_compute[227360]: 2025-11-29 08:31:20.362 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'info_cache' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:20 np0005539551 ceph-osd[78953]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:31:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:20.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.451 227364 DEBUG nova.compute.manager [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.452 227364 DEBUG oslo_concurrency.lockutils [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.452 227364 DEBUG oslo_concurrency.lockutils [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.452 227364 DEBUG oslo_concurrency.lockutils [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.453 227364 DEBUG nova.compute.manager [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.453 227364 WARNING nova.compute.manager [req-283d0b8b-8bc8-407a-91d8-24409c416b8f req-bf98180c-ec49-46dc-b208-01c51b11189f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.589 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.642 227364 DEBUG nova.network.neutron [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.669 227364 DEBUG oslo_concurrency.lockutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.711 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance destroyed successfully.#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.712 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.728 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'resources' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.742 227364 DEBUG nova.virt.libvirt.vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.742 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.743 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.744 227364 DEBUG os_vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.745 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.746 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8a243db-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.747 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.749 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.810 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.813 227364 INFO os_vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.820 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start _get_guest_xml network_info=[{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-819403e4-4b9d-48a5-8cd5-1b913fa9f1ae', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '819403e4-4b9d-48a5-8cd5-1b913fa9f1ae', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'attached_at': '', 'detached_at': '', 'volume_id': '819403e4-4b9d-48a5-8cd5-1b913fa9f1ae', 'serial': '819403e4-4b9d-48a5-8cd5-1b913fa9f1ae'}, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': 'dbfb7ff5-c7ea-44e1-b024-388a0296e20e', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.821 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.826 227364 WARNING nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.831 227364 DEBUG nova.virt.libvirt.host [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.832 227364 DEBUG nova.virt.libvirt.host [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.836 227364 DEBUG nova.virt.libvirt.host [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.836 227364 DEBUG nova.virt.libvirt.host [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.837 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.837 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.838 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.838 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.838 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.838 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.838 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.839 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.839 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.839 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.839 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.839 227364 DEBUG nova.virt.hardware [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.840 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:21 np0005539551 nova_compute[227360]: 2025-11-29 08:31:21.861 227364 DEBUG oslo_concurrency.processutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:21.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/917222048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.323 227364 DEBUG oslo_concurrency.processutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.361 227364 DEBUG oslo_concurrency.processutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1329453782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.783 227364 DEBUG oslo_concurrency.processutils [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:22.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.808 227364 DEBUG nova.virt.libvirt.vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.809 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.810 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.811 227364 DEBUG nova.objects.instance [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.828 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <uuid>7f0919df-6a69-4824-b20e-6540d1d3de30</uuid>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <name>instance-0000009c</name>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachVolumeTestJSON-server-570591477</nova:name>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:31:21</nova:creationTime>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:user uuid="5b0fe4d78df74554a3a5875ab629d59c">tempest-AttachVolumeTestJSON-169198681-project-member</nova:user>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:project uuid="1981e9617628491f938ef0ef01c061c5">tempest-AttachVolumeTestJSON-169198681</nova:project>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <nova:port uuid="b8a243db-cb27-44c3-9015-4bf4d9a49bac">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="serial">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="uuid">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-819403e4-4b9d-48a5-8cd5-1b913fa9f1ae">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <serial>819403e4-4b9d-48a5-8cd5-1b913fa9f1ae</serial>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ad:80:cb"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <target dev="tapb8a243db-cb"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/console.log" append="off"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:31:22 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:31:22 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:31:22 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:31:22 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.830 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.831 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.831 227364 DEBUG nova.virt.libvirt.driver [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.831 227364 DEBUG nova.virt.libvirt.vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.832 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.832 227364 DEBUG nova.network.os_vif_util [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.833 227364 DEBUG os_vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.834 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.834 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.836 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.836 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8a243db-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.837 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8a243db-cb, col_values=(('external_ids', {'iface-id': 'b8a243db-cb27-44c3-9015-4bf4d9a49bac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:80:cb', 'vm-uuid': '7f0919df-6a69-4824-b20e-6540d1d3de30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.8391] manager: (tapb8a243db-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.841 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.843 227364 INFO os_vif [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:31:22 np0005539551 kernel: tapb8a243db-cb: entered promiscuous mode
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.9096] manager: (tapb8a243db-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 03:31:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:22Z|00692|binding|INFO|Claiming lport b8a243db-cb27-44c3-9015-4bf4d9a49bac for this chassis.
Nov 29 03:31:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:22Z|00693|binding|INFO|b8a243db-cb27-44c3-9015-4bf4d9a49bac: Claiming fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.909 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.915 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.916 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 nova_compute[227360]: 2025-11-29 08:31:22.922 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.9227] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.9235] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.925 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.926 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 bound to our chassis#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.928 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6d35cfb-cc41-4788-977c-b8e5140795a0#033[00m
Nov 29 03:31:22 np0005539551 systemd-udevd[284833]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:22 np0005539551 systemd-machined[190756]: New machine qemu-73-instance-0000009c.
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.940 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6ee65d-1575-4025-9fc2-d7c3043ddac1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.941 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6d35cfb-c1 in ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.943 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6d35cfb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.943 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2328207e-1a99-4027-8882-fcc4cba021a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.944 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[56f44656-d891-413f-89db-37cc2dc4b053]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.9514] device (tapb8a243db-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:22 np0005539551 NetworkManager[48922]: <info>  [1764405082.9524] device (tapb8a243db-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:22 np0005539551 systemd[1]: Started Virtual Machine qemu-73-instance-0000009c.
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.962 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd290ce-0a2b-49da-b5d0-7f5c921221ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:22.985 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e36105-4e6e-4d77-9479-f645479472ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.008 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[709b383f-0807-451d-8b7a-0beaabcc8895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 systemd-udevd[284836]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:23 np0005539551 NetworkManager[48922]: <info>  [1764405083.0181] manager: (tapd6d35cfb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.017 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b652075f-09a4-4394-b546-e0f81e226f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.058 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3ef977-3ae0-4741-a24a-bb78e3a14f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.061 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7cce9b-f132-4482-a153-ec094bddfad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 NetworkManager[48922]: <info>  [1764405083.0842] device (tapd6d35cfb-c0): carrier: link connected
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.089 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[164b6c78-c063-4fcd-8d97-e2611f7ebac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.107 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.112 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[78e38d6e-0566-4538-bdcd-368cad71fe82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812996, 'reachable_time': 39615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284865, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.122 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.129 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2f562f-70b2-49ed-ad20-0efae6f2ed70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:5b88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812996, 'tstamp': 812996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284866, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:23Z|00694|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac ovn-installed in OVS
Nov 29 03:31:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:23Z|00695|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac up in Southbound
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.134 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.147 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda48b3-03bc-4eb7-b96c-c50888d0a164]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812996, 'reachable_time': 39615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284867, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.175 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[239c681d-c388-4d6a-a5a8-bb4334bad67f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.243 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23d29c9b-1ffc-4440-9f06-bebc0c05c6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.245 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.245 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.246 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d35cfb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 NetworkManager[48922]: <info>  [1764405083.2487] manager: (tapd6d35cfb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 03:31:23 np0005539551 kernel: tapd6d35cfb-c0: entered promiscuous mode
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.253 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6d35cfb-c0, col_values=(('external_ids', {'iface-id': '070d06ed-b610-481b-b747-9c7d0eb2bcf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.254 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:23Z|00696|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.255 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.257 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.258 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd9764b-7b3f-470c-a231-a0b603901c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.259 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:23.260 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'env', 'PROCESS_TAG=haproxy-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6d35cfb-cc41-4788-977c-b8e5140795a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.267 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.520 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 7f0919df-6a69-4824-b20e-6540d1d3de30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.521 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405083.5189314, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.521 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.523 227364 DEBUG nova.compute.manager [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.526 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance rebooted successfully.#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.526 227364 DEBUG nova.compute.manager [None req-46998047-2256-4396-b909-418ef2138a89 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.559 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.563 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.570 227364 DEBUG nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.571 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.571 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.572 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.572 227364 DEBUG nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.572 227364 WARNING nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.573 227364 DEBUG nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.573 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.573 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.574 227364 DEBUG oslo_concurrency.lockutils [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.574 227364 DEBUG nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.574 227364 WARNING nova.compute.manager [req-bc44e4ed-e5d2-45ce-b820-a86aff0ee2f7 req-cd5aa40f-ad95-494a-88e1-86b9ec97840d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.589 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405083.5205433, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.590 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.614 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:23 np0005539551 nova_compute[227360]: 2025-11-29 08:31:23.617 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:23 np0005539551 podman[284959]: 2025-11-29 08:31:23.635073243 +0000 UTC m=+0.051639348 container create 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:31:23 np0005539551 systemd[1]: Started libpod-conmon-4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5.scope.
Nov 29 03:31:23 np0005539551 podman[284959]: 2025-11-29 08:31:23.608372551 +0000 UTC m=+0.024938686 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:23 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:31:23 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55acfe2b2d494758ae9f06fd12a79cf988f08011819a9e53c890c744f9278ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:23 np0005539551 podman[284959]: 2025-11-29 08:31:23.749236452 +0000 UTC m=+0.165802577 container init 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:31:23 np0005539551 podman[284959]: 2025-11-29 08:31:23.754762071 +0000 UTC m=+0.171328176 container start 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:31:23 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [NOTICE]   (284978) : New worker (284980) forked
Nov 29 03:31:23 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [NOTICE]   (284978) : Loading success.
Nov 29 03:31:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:24.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.156 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.157 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.178 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.226 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.277 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.278 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.285 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.286 227364 INFO nova.compute.claims [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.418 227364 DEBUG nova.scheduler.client.report [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.491 227364 DEBUG nova.scheduler.client.report [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.491 227364 DEBUG nova.compute.provider_tree [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.511 227364 DEBUG nova.scheduler.client.report [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.539 227364 DEBUG nova.scheduler.client.report [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.591 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:27 np0005539551 nova_compute[227360]: 2025-11-29 08:31:27.873 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/561594810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.052 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.058 227364 DEBUG nova.compute.provider_tree [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.076 227364 DEBUG nova.scheduler.client.report [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.108 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.109 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.163 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.164 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.189 227364 INFO nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.210 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.280 227364 INFO nova.virt.block_device [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Booting with volume 6d9d34fb-9df6-452a-837f-86dd8a2bf2ba at /dev/vda#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.393 227364 DEBUG nova.policy [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.474 227364 DEBUG os_brick.utils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.475 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.492 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.492 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[f7548f21-c157-4209-80c5-2511c533b50c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.494 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.503 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.503 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[69dd879f-a7d6-487f-865b-c736676d5734]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.504 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.518 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.518 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[bf74fd37-e18e-4037-9616-560a114ca252]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.519 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[29a77a8e-a810-4f32-ae79-5063fab1ea25]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.520 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.547 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.550 227364 DEBUG os_brick.initiator.connectors.lightos [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.550 227364 DEBUG os_brick.initiator.connectors.lightos [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.550 227364 DEBUG os_brick.initiator.connectors.lightos [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.551 227364 DEBUG os_brick.utils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.552 227364 DEBUG nova.virt.block_device [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating existing volume attachment record: 06c20c1d-fb1e-4003-92cc-86c5030635df _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:31:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539551 nova_compute[227360]: 2025-11-29 08:31:28.981 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Successfully created port: 19b8adf6-0177-4b65-8028-bf0fe37afa9a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.499 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.501 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.502 227364 INFO nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Creating image(s)#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.502 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.503 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Ensure instance console log exists: /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.503 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.504 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.504 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.839 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Successfully updated port: 19b8adf6-0177-4b65-8028-bf0fe37afa9a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.866 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.867 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.867 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.965 227364 DEBUG nova.compute.manager [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-changed-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.966 227364 DEBUG nova.compute.manager [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Refreshing instance network info cache due to event network-changed-19b8adf6-0177-4b65-8028-bf0fe37afa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:29 np0005539551 nova_compute[227360]: 2025-11-29 08:31:29.966 227364 DEBUG oslo_concurrency.lockutils [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:30.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:30 np0005539551 nova_compute[227360]: 2025-11-29 08:31:30.861 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:31:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:32 np0005539551 nova_compute[227360]: 2025-11-29 08:31:32.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:32.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:32 np0005539551 nova_compute[227360]: 2025-11-29 08:31:32.875 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.574 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.641 227364 DEBUG nova.network.neutron [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating instance_info_cache with network_info: [{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.658 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.659 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Instance network_info: |[{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.659 227364 DEBUG oslo_concurrency.lockutils [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.659 227364 DEBUG nova.network.neutron [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Refreshing network info cache for port 19b8adf6-0177-4b65-8028-bf0fe37afa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.662 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Start _get_guest_xml network_info=[{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6d9d34fb-9df6-452a-837f-86dd8a2bf2ba', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6d9d34fb-9df6-452a-837f-86dd8a2bf2ba', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b08df6d7-85bd-4c2a-9bd8-f37384b9148a', 'attached_at': '', 'detached_at': '', 'volume_id': '6d9d34fb-9df6-452a-837f-86dd8a2bf2ba', 'serial': '6d9d34fb-9df6-452a-837f-86dd8a2bf2ba', 'multiattach': True}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '06c20c1d-fb1e-4003-92cc-86c5030635df', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.667 227364 WARNING nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.672 227364 DEBUG nova.virt.libvirt.host [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.673 227364 DEBUG nova.virt.libvirt.host [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.677 227364 DEBUG nova.virt.libvirt.host [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.678 227364 DEBUG nova.virt.libvirt.host [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.679 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.679 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.679 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.680 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.680 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.680 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.680 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.680 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.681 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.681 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.681 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.681 227364 DEBUG nova.virt.hardware [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.940 227364 DEBUG nova.storage.rbd_utils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:33 np0005539551 nova_compute[227360]: 2025-11-29 08:31:33.949 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:34.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/109521463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.802 227364 DEBUG nova.network.neutron [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updated VIF entry in instance network info cache for port 19b8adf6-0177-4b65-8028-bf0fe37afa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.803 227364 DEBUG nova.network.neutron [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating instance_info_cache with network_info: [{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.821 227364 DEBUG oslo_concurrency.lockutils [req-40758e59-8347-46f9-a7ca-c41e79dd3c0c req-84ae5072-3507-480d-b5d7-89d1140d6211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.963 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.988 227364 DEBUG nova.virt.libvirt.vif [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-33702943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-33702943',id=159,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-10oir9rx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:28Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=b08df6d7-85bd-4c2a-9bd8-f37384b9148a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.989 227364 DEBUG nova.network.os_vif_util [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.989 227364 DEBUG nova.network.os_vif_util [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:35 np0005539551 nova_compute[227360]: 2025-11-29 08:31:35.990 227364 DEBUG nova.objects.instance [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid b08df6d7-85bd-4c2a-9bd8-f37384b9148a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.009 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <uuid>b08df6d7-85bd-4c2a-9bd8-f37384b9148a</uuid>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <name>instance-0000009f</name>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-33702943</nova:name>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:31:33</nova:creationTime>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <nova:port uuid="19b8adf6-0177-4b65-8028-bf0fe37afa9a">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="serial">b08df6d7-85bd-4c2a-9bd8-f37384b9148a</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="uuid">b08df6d7-85bd-4c2a-9bd8-f37384b9148a</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-6d9d34fb-9df6-452a-837f-86dd8a2bf2ba">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <serial>6d9d34fb-9df6-452a-837f-86dd8a2bf2ba</serial>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <shareable/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:17:d8:67"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <target dev="tap19b8adf6-01"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/console.log" append="off"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:31:36 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:31:36 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:31:36 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:31:36 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.010 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Preparing to wait for external event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.011 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.011 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.011 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.012 227364 DEBUG nova.virt.libvirt.vif [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-33702943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-33702943',id=159,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-10oir9rx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_
at=2025-11-29T08:31:28Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=b08df6d7-85bd-4c2a-9bd8-f37384b9148a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.012 227364 DEBUG nova.network.os_vif_util [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.012 227364 DEBUG nova.network.os_vif_util [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.013 227364 DEBUG os_vif [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.013 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.014 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.014 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.016 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.017 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19b8adf6-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.017 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19b8adf6-01, col_values=(('external_ids', {'iface-id': '19b8adf6-0177-4b65-8028-bf0fe37afa9a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:d8:67', 'vm-uuid': 'b08df6d7-85bd-4c2a-9bd8-f37384b9148a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.019 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539551 NetworkManager[48922]: <info>  [1764405096.0205] manager: (tap19b8adf6-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.026 227364 INFO os_vif [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01')#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.398 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.399 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.399 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:17:d8:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.400 227364 INFO nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Using config drive#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.432 227364 DEBUG nova.storage.rbd_utils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.883 227364 INFO nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Creating config drive at /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config#033[00m
Nov 29 03:31:36 np0005539551 nova_compute[227360]: 2025-11-29 08:31:36.888 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply3ushez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.028 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply3ushez" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.059 227364 DEBUG nova.storage.rbd_utils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.064 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.218 227364 DEBUG oslo_concurrency.processutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config b08df6d7-85bd-4c2a-9bd8-f37384b9148a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.220 227364 INFO nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Deleting local config drive /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 kernel: tap19b8adf6-01: entered promiscuous mode
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.2732] manager: (tap19b8adf6-01): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 03:31:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:37Z|00697|binding|INFO|Claiming lport 19b8adf6-0177-4b65-8028-bf0fe37afa9a for this chassis.
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.276 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:37Z|00698|binding|INFO|19b8adf6-0177-4b65-8028-bf0fe37afa9a: Claiming fa:16:3e:17:d8:67 10.100.0.14
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.282 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:d8:67 10.100.0.14'], port_security=['fa:16:3e:17:d8:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b08df6d7-85bd-4c2a-9bd8-f37384b9148a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=19b8adf6-0177-4b65-8028-bf0fe37afa9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.284 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 19b8adf6-0177-4b65-8028-bf0fe37afa9a in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.286 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:31:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:37Z|00699|binding|INFO|Setting lport 19b8adf6-0177-4b65-8028-bf0fe37afa9a ovn-installed in OVS
Nov 29 03:31:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:37Z|00700|binding|INFO|Setting lport 19b8adf6-0177-4b65-8028-bf0fe37afa9a up in Southbound
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.295 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.297 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5496ab-7021-4f8e-8a1e-d3b7ae5e8737]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.299 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped50ff83-51 in ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.301 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped50ff83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.301 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe57c83-5aba-44d1-9dc3-8ec94f37bce0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.302 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[32d343f2-4504-4604-9939-02ac4428766e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 systemd-machined[190756]: New machine qemu-74-instance-0000009f.
Nov 29 03:31:37 np0005539551 systemd-udevd[285134]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.314 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[06895b77-abf3-499f-bc97-88c92e9f7a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.3233] device (tap19b8adf6-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.3241] device (tap19b8adf6-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:37 np0005539551 systemd[1]: Started Virtual Machine qemu-74-instance-0000009f.
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.341 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f4e396-35bc-4ddb-8c3a-d79c9d877db5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.369 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d208e12a-5689-4ac3-a4bb-496c5b055896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.3754] manager: (taped50ff83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.374 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[624c751a-470e-4813-9d51-2e87483a98d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.411 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[06ad6996-398d-41e8-a21d-1cff4404f07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.415 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ea41ea-5ca0-4d4f-9ff0-4b4b0a45ab32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.431 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.4397] device (taped50ff83-50): carrier: link connected
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.447 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5b346a7b-b530-4940-b190-0288da188db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.469 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d500af5f-9010-45ea-9fed-34d394a897f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285165, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.487 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b59ef831-9262-4a5c-90c1-a0b2ab3b1caf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:60f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814431, 'tstamp': 814431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285166, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.501 227364 DEBUG nova.compute.manager [req-f0d024ef-c47a-436c-8c03-db8acf6c0188 req-5e05a0aa-775d-4465-9d4c-9d5fff77b66a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.502 227364 DEBUG oslo_concurrency.lockutils [req-f0d024ef-c47a-436c-8c03-db8acf6c0188 req-5e05a0aa-775d-4465-9d4c-9d5fff77b66a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.502 227364 DEBUG oslo_concurrency.lockutils [req-f0d024ef-c47a-436c-8c03-db8acf6c0188 req-5e05a0aa-775d-4465-9d4c-9d5fff77b66a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.502 227364 DEBUG oslo_concurrency.lockutils [req-f0d024ef-c47a-436c-8c03-db8acf6c0188 req-5e05a0aa-775d-4465-9d4c-9d5fff77b66a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.502 227364 DEBUG nova.compute.manager [req-f0d024ef-c47a-436c-8c03-db8acf6c0188 req-5e05a0aa-775d-4465-9d4c-9d5fff77b66a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Processing event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.512 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45d63255-d8b0-49b4-a5e6-a2333b91770f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285167, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.544 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb6ae9c-eb23-4f9a-aad0-c8352da8397d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.600 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[15ac1e04-b817-4c55-a552-2524bd345b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.601 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.602 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.602 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.604 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 kernel: taped50ff83-50: entered promiscuous mode
Nov 29 03:31:37 np0005539551 NetworkManager[48922]: <info>  [1764405097.6052] manager: (taped50ff83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.606 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:37Z|00701|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.620 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.622 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.623 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa176d8-bd2c-4c42-b0ae-d9e55d67acac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.624 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:31:37.624 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'env', 'PROCESS_TAG=haproxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.781 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.782 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405097.7809198, b08df6d7-85bd-4c2a-9bd8-f37384b9148a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.782 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.786 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.789 227364 INFO nova.virt.libvirt.driver [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Instance spawned successfully.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.789 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.807 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.811 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.814 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.814 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.815 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.815 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.815 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.816 227364 DEBUG nova.virt.libvirt.driver [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.836 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.837 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405097.7817645, b08df6d7-85bd-4c2a-9bd8-f37384b9148a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.837 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.860 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.863 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405097.7851546, b08df6d7-85bd-4c2a-9bd8-f37384b9148a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.864 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.873 227364 INFO nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Took 8.37 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.873 227364 DEBUG nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.895 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.898 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.931 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.954 227364 INFO nova.compute.manager [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Took 10.71 seconds to build instance.#033[00m
Nov 29 03:31:37 np0005539551 nova_compute[227360]: 2025-11-29 08:31:37.982 227364 DEBUG oslo_concurrency.lockutils [None req-9c9d7924-8d14-422a-812b-9258e8130013 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539551 podman[285241]: 2025-11-29 08:31:37.973975804 +0000 UTC m=+0.023338063 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:38 np0005539551 podman[285241]: 2025-11-29 08:31:38.210006719 +0000 UTC m=+0.259368988 container create 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:31:38 np0005539551 systemd[1]: Started libpod-conmon-5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39.scope.
Nov 29 03:31:38 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:31:38 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3635dcbb01eb160e0a1065c6ad069524e5aa01ddce054d2277257fc0e5c79cd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:38 np0005539551 podman[285241]: 2025-11-29 08:31:38.311462043 +0000 UTC m=+0.360824302 container init 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:31:38 np0005539551 podman[285241]: 2025-11-29 08:31:38.316592712 +0000 UTC m=+0.365954951 container start 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:31:38 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [NOTICE]   (285260) : New worker (285262) forked
Nov 29 03:31:38 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [NOTICE]   (285260) : Loading success.
Nov 29 03:31:38 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:38Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:31:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1031101901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:38.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.623 227364 DEBUG nova.compute.manager [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.623 227364 DEBUG oslo_concurrency.lockutils [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.624 227364 DEBUG oslo_concurrency.lockutils [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.624 227364 DEBUG oslo_concurrency.lockutils [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.624 227364 DEBUG nova.compute.manager [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] No waiting events found dispatching network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:39 np0005539551 nova_compute[227360]: 2025-11-29 08:31:39.625 227364 WARNING nova.compute.manager [req-6fb9bb82-2ea8-4d79-883e-d0481d4ac24f req-c6a16172-2dca-4cce-8658-b80dbadaa95e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received unexpected event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.433 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.433 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/300034909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.845 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.989 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.991 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.996 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.996 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:40 np0005539551 nova_compute[227360]: 2025-11-29 08:31:40.997 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.020 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.179 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.180 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3973MB free_disk=20.852489471435547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.181 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.181 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.281 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 7f0919df-6a69-4824-b20e-6540d1d3de30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.281 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b08df6d7-85bd-4c2a-9bd8-f37384b9148a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.281 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.281 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:31:41 np0005539551 nova_compute[227360]: 2025-11-29 08:31:41.351 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539551 podman[285316]: 2025-11-29 08:31:42.598466008 +0000 UTC m=+0.049712186 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:42 np0005539551 podman[285315]: 2025-11-29 08:31:42.606367881 +0000 UTC m=+0.061669820 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:31:42 np0005539551 podman[285314]: 2025-11-29 08:31:42.629922798 +0000 UTC m=+0.085703609 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3050560812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.806 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.811 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.824 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.846 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:31:42 np0005539551 nova_compute[227360]: 2025-11-29 08:31:42.847 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:44 np0005539551 nova_compute[227360]: 2025-11-29 08:31:44.848 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:44 np0005539551 nova_compute[227360]: 2025-11-29 08:31:44.848 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:31:44 np0005539551 nova_compute[227360]: 2025-11-29 08:31:44.848 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:31:45 np0005539551 nova_compute[227360]: 2025-11-29 08:31:45.274 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:45 np0005539551 nova_compute[227360]: 2025-11-29 08:31:45.275 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:45 np0005539551 nova_compute[227360]: 2025-11-29 08:31:45.275 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:31:45 np0005539551 nova_compute[227360]: 2025-11-29 08:31:45.275 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.022 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.607 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.650 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.651 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.652 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:46 np0005539551 nova_compute[227360]: 2025-11-29 08:31:46.652 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:47 np0005539551 nova_compute[227360]: 2025-11-29 08:31:47.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539551 nova_compute[227360]: 2025-11-29 08:31:47.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:48.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:50.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:51 np0005539551 nova_compute[227360]: 2025-11-29 08:31:51.025 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:51 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:31:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:52 np0005539551 nova_compute[227360]: 2025-11-29 08:31:52.236 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:53Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:d8:67 10.100.0.14
Nov 29 03:31:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:31:53Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:d8:67 10.100.0.14
Nov 29 03:31:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539551 nova_compute[227360]: 2025-11-29 08:31:54.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:54 np0005539551 nova_compute[227360]: 2025-11-29 08:31:54.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:31:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:56 np0005539551 nova_compute[227360]: 2025-11-29 08:31:56.029 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:56 np0005539551 nova_compute[227360]: 2025-11-29 08:31:56.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:56.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:57 np0005539551 nova_compute[227360]: 2025-11-29 08:31:57.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539551 nova_compute[227360]: 2025-11-29 08:31:57.865 227364 DEBUG oslo_concurrency.lockutils [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:57 np0005539551 nova_compute[227360]: 2025-11-29 08:31:57.866 227364 DEBUG oslo_concurrency.lockutils [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:57 np0005539551 nova_compute[227360]: 2025-11-29 08:31:57.886 227364 INFO nova.compute.manager [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Detaching volume 819403e4-4b9d-48a5-8cd5-1b913fa9f1ae#033[00m
Nov 29 03:31:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.078 227364 INFO nova.virt.block_device [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Attempting to driver detach volume 819403e4-4b9d-48a5-8cd5-1b913fa9f1ae from mountpoint /dev/vdb#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.088 227364 DEBUG nova.virt.libvirt.driver [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Attempting to detach device vdb from instance 7f0919df-6a69-4824-b20e-6540d1d3de30 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.089 227364 DEBUG nova.virt.libvirt.guest [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-819403e4-4b9d-48a5-8cd5-1b913fa9f1ae">
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <serial>819403e4-4b9d-48a5-8cd5-1b913fa9f1ae</serial>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:31:58 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.095 227364 INFO nova.virt.libvirt.driver [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdb from instance 7f0919df-6a69-4824-b20e-6540d1d3de30 from the persistent domain config.#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.096 227364 DEBUG nova.virt.libvirt.driver [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7f0919df-6a69-4824-b20e-6540d1d3de30 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.096 227364 DEBUG nova.virt.libvirt.guest [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-819403e4-4b9d-48a5-8cd5-1b913fa9f1ae">
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <serial>819403e4-4b9d-48a5-8cd5-1b913fa9f1ae</serial>
Nov 29 03:31:58 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:31:58 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:31:58 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.154 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405118.1538217, 7f0919df-6a69-4824-b20e-6540d1d3de30 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.157 227364 DEBUG nova.virt.libvirt.driver [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7f0919df-6a69-4824-b20e-6540d1d3de30 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.159 227364 INFO nova.virt.libvirt.driver [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdb from instance 7f0919df-6a69-4824-b20e-6540d1d3de30 from the live domain config.#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.382 227364 DEBUG nova.objects.instance [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:58 np0005539551 nova_compute[227360]: 2025-11-29 08:31:58.418 227364 DEBUG oslo_concurrency.lockutils [None req-01c9ae2a-69aa-42da-bdce-fe028e811225 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:31:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:58.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.253 227364 DEBUG oslo_concurrency.lockutils [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.254 227364 DEBUG oslo_concurrency.lockutils [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.254 227364 DEBUG nova.compute.manager [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.256 227364 DEBUG nova.compute.manager [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.257 227364 DEBUG nova.objects.instance [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:59 np0005539551 nova_compute[227360]: 2025-11-29 08:31:59.284 227364 DEBUG nova.virt.libvirt.driver [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:32:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:00.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:00.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 kernel: tapb8a243db-cb (unregistering): left promiscuous mode
Nov 29 03:32:01 np0005539551 NetworkManager[48922]: <info>  [1764405121.5253] device (tapb8a243db-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.535 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:01Z|00702|binding|INFO|Releasing lport b8a243db-cb27-44c3-9015-4bf4d9a49bac from this chassis (sb_readonly=0)
Nov 29 03:32:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:01Z|00703|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac down in Southbound
Nov 29 03:32:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:01Z|00704|binding|INFO|Removing iface tapb8a243db-cb ovn-installed in OVS
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.538 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.543 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.545 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 unbound from our chassis#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.546 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6d35cfb-cc41-4788-977c-b8e5140795a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.548 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[075c93de-3e06-43eb-865c-a0d0d7617113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.548 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace which is not needed anymore#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.552 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Nov 29 03:32:01 np0005539551 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009c.scope: Consumed 15.026s CPU time.
Nov 29 03:32:01 np0005539551 systemd-machined[190756]: Machine qemu-73-instance-0000009c terminated.
Nov 29 03:32:01 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [NOTICE]   (284978) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:01 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [NOTICE]   (284978) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:01 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [WARNING]  (284978) : Exiting Master process...
Nov 29 03:32:01 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [ALERT]    (284978) : Current worker (284980) exited with code 143 (Terminated)
Nov 29 03:32:01 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[284974]: [WARNING]  (284978) : All workers exited. Exiting... (0)
Nov 29 03:32:01 np0005539551 systemd[1]: libpod-4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5.scope: Deactivated successfully.
Nov 29 03:32:01 np0005539551 podman[285406]: 2025-11-29 08:32:01.683732058 +0000 UTC m=+0.045927523 container died 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:32:01 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:01 np0005539551 systemd[1]: var-lib-containers-storage-overlay-a55acfe2b2d494758ae9f06fd12a79cf988f08011819a9e53c890c744f9278ba-merged.mount: Deactivated successfully.
Nov 29 03:32:01 np0005539551 podman[285406]: 2025-11-29 08:32:01.719861495 +0000 UTC m=+0.082056970 container cleanup 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:32:01 np0005539551 systemd[1]: libpod-conmon-4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5.scope: Deactivated successfully.
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.782 227364 DEBUG nova.compute.manager [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.783 227364 DEBUG oslo_concurrency.lockutils [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.784 227364 DEBUG oslo_concurrency.lockutils [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.785 227364 DEBUG oslo_concurrency.lockutils [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.785 227364 DEBUG nova.compute.manager [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.786 227364 WARNING nova.compute.manager [req-c1af31e4-cded-4e69-979d-a536e25b1e25 req-4cdfcb0f-b0a3-4730-b7ad-80572cd7db38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:32:01 np0005539551 podman[285440]: 2025-11-29 08:32:01.790074815 +0000 UTC m=+0.050051595 container remove 4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.795 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[83277678-cd39-45fe-88b8-e6be7d0bb1d8]: (4, ('Sat Nov 29 08:32:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5)\n4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5\nSat Nov 29 08:32:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5)\n4551d07094fe68d70fc91e2f775e47e731b4f2ba550abe99a9eebf7487f025a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.796 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6290e913-b0f8-4140-a47d-73c78a784206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.797 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:01 np0005539551 kernel: tapd6d35cfb-c0: left promiscuous mode
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.799 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 nova_compute[227360]: 2025-11-29 08:32:01.818 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.820 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d98d1d82-8ffe-4533-9b05-c204fc4c621b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.836 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ce49bf8d-aa01-4ebe-bd79-85ec8c2109b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.837 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9024b67b-2fbb-4295-a8e9-1b122e1b3068]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.852 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3c293f-daff-4fd9-9904-b08807b06b8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812988, 'reachable_time': 36986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285469, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.855 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:01 np0005539551 systemd[1]: run-netns-ovnmeta\x2dd6d35cfb\x2dcc41\x2d4788\x2d977c\x2db8e5140795a0.mount: Deactivated successfully.
Nov 29 03:32:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:01.855 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[435b3086-6d6e-4cb8-bcc8-ee99becb7a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:02.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.129 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.130 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.149 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.219 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.220 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.227 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.228 227364 INFO nova.compute.claims [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.297 227364 INFO nova.virt.libvirt.driver [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.308 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance destroyed successfully.#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.308 227364 DEBUG nova.objects.instance [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.322 227364 DEBUG nova.compute.manager [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.359 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.389 227364 DEBUG oslo_concurrency.lockutils [None req-3eff3f92-637d-43ad-bc84-45056240def3 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2737042945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.789 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.795 227364 DEBUG nova.compute.provider_tree [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:02.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.853 227364 DEBUG nova.scheduler.client.report [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.895 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.897 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.937 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.938 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.960 227364 INFO nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:32:02 np0005539551 nova_compute[227360]: 2025-11-29 08:32:02.979 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.106 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.108 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.108 227364 INFO nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Creating image(s)#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.136 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.165 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.192 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.195 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.224 227364 DEBUG nova.policy [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.260 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.276 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.277 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.277 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.278 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.302 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.306 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.339 227364 DEBUG oslo_concurrency.lockutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.340 227364 DEBUG oslo_concurrency.lockutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquired lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.340 227364 DEBUG nova.network.neutron [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.340 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'info_cache' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.857 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.885 227364 DEBUG nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.885 227364 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.886 227364 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.886 227364 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.886 227364 DEBUG nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.887 227364 WARNING nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:32:03 np0005539551 nova_compute[227360]: 2025-11-29 08:32:03.915 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] resizing rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.004 227364 DEBUG nova.objects.instance [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.020 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.021 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Ensure instance console log exists: /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.021 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.022 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.022 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:04.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:04 np0005539551 nova_compute[227360]: 2025-11-29 08:32:04.128 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Successfully created port: c4bf30df-8d4f-4601-a7a6-1d851938fab0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:32:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:04.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.034 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.043 227364 DEBUG nova.network.neutron [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.070 227364 DEBUG oslo_concurrency.lockutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Releasing lock "refresh_cache-7f0919df-6a69-4824-b20e-6540d1d3de30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.094 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance destroyed successfully.#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.095 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.108 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'resources' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.123 227364 DEBUG nova.virt.libvirt.vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.123 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.124 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.124 227364 DEBUG os_vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.126 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.126 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8a243db-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.131 227364 INFO os_vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.136 227364 DEBUG nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start _get_guest_xml network_info=[{"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.139 227364 WARNING nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.143 227364 DEBUG nova.virt.libvirt.host [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.144 227364 DEBUG nova.virt.libvirt.host [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.146 227364 DEBUG nova.virt.libvirt.host [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.146 227364 DEBUG nova.virt.libvirt.host [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.147 227364 DEBUG nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.147 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.148 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.148 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.148 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.148 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.148 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.149 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.149 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.149 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.149 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.149 227364 DEBUG nova.virt.hardware [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.150 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.164 227364 DEBUG oslo_concurrency.processutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.220 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Successfully updated port: c4bf30df-8d4f-4601-a7a6-1d851938fab0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.236 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.236 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.236 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.336 227364 DEBUG nova.compute.manager [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.336 227364 DEBUG nova.compute.manager [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing instance network info cache due to event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.337 227364 DEBUG oslo_concurrency.lockutils [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:06Z|00705|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.427 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.436 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:32:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4134014667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.596 227364 DEBUG oslo_concurrency.processutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:06 np0005539551 nova_compute[227360]: 2025-11-29 08:32:06.650 227364 DEBUG oslo_concurrency.processutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1352083409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.097 227364 DEBUG oslo_concurrency.processutils [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.099 227364 DEBUG nova.virt.libvirt.vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.099 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.100 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.101 227364 DEBUG nova.objects.instance [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.115 227364 DEBUG nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <uuid>7f0919df-6a69-4824-b20e-6540d1d3de30</uuid>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <name>instance-0000009c</name>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:name>tempest-AttachVolumeTestJSON-server-570591477</nova:name>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:32:06</nova:creationTime>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:user uuid="5b0fe4d78df74554a3a5875ab629d59c">tempest-AttachVolumeTestJSON-169198681-project-member</nova:user>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:project uuid="1981e9617628491f938ef0ef01c061c5">tempest-AttachVolumeTestJSON-169198681</nova:project>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <nova:port uuid="b8a243db-cb27-44c3-9015-4bf4d9a49bac">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="serial">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="uuid">7f0919df-6a69-4824-b20e-6540d1d3de30</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/7f0919df-6a69-4824-b20e-6540d1d3de30_disk.config">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ad:80:cb"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <target dev="tapb8a243db-cb"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30/console.log" append="off"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:32:07 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:32:07 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:32:07 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:32:07 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.116 227364 DEBUG nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.117 227364 DEBUG nova.virt.libvirt.driver [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.118 227364 DEBUG nova.virt.libvirt.vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.118 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.119 227364 DEBUG nova.network.os_vif_util [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.119 227364 DEBUG os_vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.120 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.120 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.123 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.123 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8a243db-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.123 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8a243db-cb, col_values=(('external_ids', {'iface-id': 'b8a243db-cb27-44c3-9015-4bf4d9a49bac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:80:cb', 'vm-uuid': '7f0919df-6a69-4824-b20e-6540d1d3de30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.124 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.1257] manager: (tapb8a243db-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.128 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.129 227364 INFO os_vif [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 kernel: tapb8a243db-cb: entered promiscuous mode
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.2989] manager: (tapb8a243db-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.298 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:07Z|00706|binding|INFO|Claiming lport b8a243db-cb27-44c3-9015-4bf4d9a49bac for this chassis.
Nov 29 03:32:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:07Z|00707|binding|INFO|b8a243db-cb27-44c3-9015-4bf4d9a49bac: Claiming fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.310 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.311 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 bound to our chassis#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.312 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6d35cfb-cc41-4788-977c-b8e5140795a0#033[00m
Nov 29 03:32:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:07Z|00708|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac ovn-installed in OVS
Nov 29 03:32:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:07Z|00709|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac up in Southbound
Nov 29 03:32:07 np0005539551 systemd-udevd[285734]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.323 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.325 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74277aa2-6a5f-4c80-aef0-7a4da837fb93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.325 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6d35cfb-c1 in ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.327 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6d35cfb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.327 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9d8001-c843-487b-afd5-857d3ab05985]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.328 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[18db8fc9-2be4-4c08-8481-db5b6406ea25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.327 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.338 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[61f106e0-4036-4002-9154-8d8a3eaec7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.3420] device (tapb8a243db-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.3429] device (tapb8a243db-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:32:07 np0005539551 systemd-machined[190756]: New machine qemu-75-instance-0000009c.
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.353 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a73145cb-6f44-4697-86b4-414bdd450f00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 systemd[1]: Started Virtual Machine qemu-75-instance-0000009c.
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.382 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f3566c-f81e-4050-bee3-a7e44f4ab90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 systemd-udevd[285739]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.3878] manager: (tapd6d35cfb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.387 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a62df6-ab82-4d67-9e92-a66082adf2d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.412 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bffe5bc8-b3c5-4650-961e-a1f189836879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.416 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[99529449-07c8-4806-ad4c-ee7b13e806c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.418 227364 DEBUG nova.network.neutron [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.4355] device (tapd6d35cfb-c0): carrier: link connected
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.435 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.435 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Instance network_info: |[{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.435 227364 DEBUG oslo_concurrency.lockutils [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.436 227364 DEBUG nova.network.neutron [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.438 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Start _get_guest_xml network_info=[{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.441 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fe5e20-9556-4936-be59-5a2136eb74ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.444 227364 WARNING nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.448 227364 DEBUG nova.virt.libvirt.host [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.449 227364 DEBUG nova.virt.libvirt.host [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.455 227364 DEBUG nova.virt.libvirt.host [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.455 227364 DEBUG nova.virt.libvirt.host [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.456 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.456 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.457 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.457 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.457 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.457 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.457 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.456 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfeed9f-6058-42a6-a504-75725ddd0917]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817431, 'reachable_time': 20455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285768, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.458 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.458 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.458 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.458 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.458 227364 DEBUG nova.virt.hardware [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.461 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.472 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3605917b-c7b9-4b7e-a6ad-66f7eaf33971]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:5b88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817431, 'tstamp': 817431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285769, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.486 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6a854be2-7ccc-4761-a6ed-65b6c3ce796f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817431, 'reachable_time': 20455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285771, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.518 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a998fb8f-19e4-446e-aa8a-97ff9e5bb036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.571 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d959ca33-0e9d-4225-b946-64bd81884418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.572 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.573 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.573 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d35cfb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 kernel: tapd6d35cfb-c0: entered promiscuous mode
Nov 29 03:32:07 np0005539551 NetworkManager[48922]: <info>  [1764405127.5761] manager: (tapd6d35cfb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.575 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.580 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6d35cfb-c0, col_values=(('external_ids', {'iface-id': '070d06ed-b610-481b-b747-9c7d0eb2bcf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.581 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:07Z|00710|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.585 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.591 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[15654f5b-1c2d-4d27-afb2-a16473faded8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.591 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:32:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:07.592 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'env', 'PROCESS_TAG=haproxy-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6d35cfb-cc41-4788-977c-b8e5140795a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.595 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/114272485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.896 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.922 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:07 np0005539551 nova_compute[227360]: 2025-11-29 08:32:07.927 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:07 np0005539551 podman[285824]: 2025-11-29 08:32:07.957635052 +0000 UTC m=+0.048846502 container create 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:32:07 np0005539551 systemd[1]: Started libpod-conmon-2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae.scope.
Nov 29 03:32:08 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:32:08 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d658145cf2b31b1a5df29d096da33c06e3e75b64ba83f516ce57731d1e57ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:08 np0005539551 podman[285824]: 2025-11-29 08:32:07.936415917 +0000 UTC m=+0.027627387 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:32:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:32:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:08.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:32:08 np0005539551 podman[285824]: 2025-11-29 08:32:08.067794562 +0000 UTC m=+0.159006042 container init 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:08 np0005539551 podman[285824]: 2025-11-29 08:32:08.073138516 +0000 UTC m=+0.164349966 container start 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.079 227364 DEBUG nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.079 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.080 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.080 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.080 227364 DEBUG nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.080 227364 WARNING nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.081 227364 DEBUG nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.081 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.081 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.081 227364 DEBUG oslo_concurrency.lockutils [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.082 227364 DEBUG nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.082 227364 WARNING nova.compute.manager [req-6f4d8982-4409-4d56-9ec3-59b8cfd73d60 req-3330f757-a3fe-46b4-b5db-1878989a03ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:32:08 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [NOTICE]   (285881) : New worker (285883) forked
Nov 29 03:32:08 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [NOTICE]   (285881) : Loading success.
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.303 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 7f0919df-6a69-4824-b20e-6540d1d3de30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.303 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405128.302573, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.304 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.306 227364 DEBUG nova.compute.manager [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.310 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance rebooted successfully.#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.311 227364 DEBUG nova.compute.manager [None req-1b0d3b7d-e26f-43c7-afdc-f66e13ceca1a 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.325 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.328 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.348 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.348 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405128.3044524, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.349 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Started (Lifecycle Event)#033[00m
Nov 29 03:32:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/102128650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.376 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.379 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.386 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.387 227364 DEBUG nova.virt.libvirt.vif [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=162,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-6pwgg4cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=2293bfb9-d91d-4ee4-8347-317cf45fe9c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.388 227364 DEBUG nova.network.os_vif_util [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.389 227364 DEBUG nova.network.os_vif_util [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.391 227364 DEBUG nova.objects.instance [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.401 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <uuid>2293bfb9-d91d-4ee4-8347-317cf45fe9c4</uuid>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <name>instance-000000a2</name>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:name>multiattach-server-1</nova:name>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:32:07</nova:creationTime>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <nova:port uuid="c4bf30df-8d4f-4601-a7a6-1d851938fab0">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="serial">2293bfb9-d91d-4ee4-8347-317cf45fe9c4</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="uuid">2293bfb9-d91d-4ee4-8347-317cf45fe9c4</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:fb:e3:fc"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <target dev="tapc4bf30df-8d"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/console.log" append="off"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:32:08 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:32:08 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:32:08 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:32:08 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.408 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Preparing to wait for external event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.408 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.408 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.409 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.410 227364 DEBUG nova.virt.libvirt.vif [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=162,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-6pwgg4cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_r
am='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=2293bfb9-d91d-4ee4-8347-317cf45fe9c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.410 227364 DEBUG nova.network.os_vif_util [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.411 227364 DEBUG nova.network.os_vif_util [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.411 227364 DEBUG os_vif [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.412 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.413 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.414 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.417 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.417 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4bf30df-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.418 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4bf30df-8d, col_values=(('external_ids', {'iface-id': 'c4bf30df-8d4f-4601-a7a6-1d851938fab0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:e3:fc', 'vm-uuid': '2293bfb9-d91d-4ee4-8347-317cf45fe9c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539551 NetworkManager[48922]: <info>  [1764405128.4203] manager: (tapc4bf30df-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.422 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.425 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.427 227364 INFO os_vif [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d')#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.508 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.508 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.509 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:fb:e3:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.509 227364 INFO nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Using config drive#033[00m
Nov 29 03:32:08 np0005539551 nova_compute[227360]: 2025-11-29 08:32:08.532 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:08.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.254 227364 INFO nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Creating config drive at /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.259 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph55_llcf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.395 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph55_llcf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.430 227364 DEBUG nova.storage.rbd_utils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.435 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.467 227364 DEBUG nova.network.neutron [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updated VIF entry in instance network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.468 227364 DEBUG nova.network.neutron [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:09 np0005539551 nova_compute[227360]: 2025-11-29 08:32:09.494 227364 DEBUG oslo_concurrency.lockutils [req-380298a3-1048-4efa-8717-7816d14d0069 req-c1b4a378-5adc-49ad-819a-582a44901144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:10.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.071 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.073 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.150 227364 DEBUG oslo_concurrency.processutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config 2293bfb9-d91d-4ee4-8347-317cf45fe9c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.715s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.151 227364 INFO nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Deleting local config drive /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4/disk.config because it was imported into RBD.#033[00m
Nov 29 03:32:10 np0005539551 kernel: tapc4bf30df-8d: entered promiscuous mode
Nov 29 03:32:10 np0005539551 NetworkManager[48922]: <info>  [1764405130.1937] manager: (tapc4bf30df-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 03:32:10 np0005539551 systemd-udevd[285749]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:10Z|00711|binding|INFO|Claiming lport c4bf30df-8d4f-4601-a7a6-1d851938fab0 for this chassis.
Nov 29 03:32:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:10Z|00712|binding|INFO|c4bf30df-8d4f-4601-a7a6-1d851938fab0: Claiming fa:16:3e:fb:e3:fc 10.100.0.4
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.209 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:e3:fc 10.100.0.4'], port_security=['fa:16:3e:fb:e3:fc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2293bfb9-d91d-4ee4-8347-317cf45fe9c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c4bf30df-8d4f-4601-a7a6-1d851938fab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.210 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c4bf30df-8d4f-4601-a7a6-1d851938fab0 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.213 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 NetworkManager[48922]: <info>  [1764405130.2136] device (tapc4bf30df-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:32:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:10Z|00713|binding|INFO|Setting lport c4bf30df-8d4f-4601-a7a6-1d851938fab0 ovn-installed in OVS
Nov 29 03:32:10 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:10Z|00714|binding|INFO|Setting lport c4bf30df-8d4f-4601-a7a6-1d851938fab0 up in Southbound
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.215 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 NetworkManager[48922]: <info>  [1764405130.2151] device (tapc4bf30df-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.215 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.234 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[80a28f11-f273-44c3-9020-45aa887cd54a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 systemd-machined[190756]: New machine qemu-76-instance-000000a2.
Nov 29 03:32:10 np0005539551 systemd[1]: Started Virtual Machine qemu-76-instance-000000a2.
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.264 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bd07f87e-8ac5-4e33-9b07-d3828c029470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.267 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cf3add-3053-4b4c-879a-3d8e3b83e8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.297 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8765c9b8-d85d-43f1-9f91-4f8fc3e72ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.313 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1b842e-b546-4f4a-b70d-4cd43fa07151]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286020, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.332 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75da8134-876b-4267-9bff-76f88fc2209b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814444, 'tstamp': 814444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286021, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814447, 'tstamp': 814447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286021, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.334 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.335 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.336 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.337 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.337 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.338 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:10 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:10.338 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.404 227364 DEBUG nova.compute.manager [req-d67aef3d-17eb-49a8-846d-84021e2f0018 req-8c657818-baec-4118-93e7-03614c92b3d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.404 227364 DEBUG oslo_concurrency.lockutils [req-d67aef3d-17eb-49a8-846d-84021e2f0018 req-8c657818-baec-4118-93e7-03614c92b3d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.404 227364 DEBUG oslo_concurrency.lockutils [req-d67aef3d-17eb-49a8-846d-84021e2f0018 req-8c657818-baec-4118-93e7-03614c92b3d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.405 227364 DEBUG oslo_concurrency.lockutils [req-d67aef3d-17eb-49a8-846d-84021e2f0018 req-8c657818-baec-4118-93e7-03614c92b3d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.405 227364 DEBUG nova.compute.manager [req-d67aef3d-17eb-49a8-846d-84021e2f0018 req-8c657818-baec-4118-93e7-03614c92b3d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Processing event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:32:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:10.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.943 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405130.9434712, 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.944 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.946 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.948 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.951 227364 INFO nova.virt.libvirt.driver [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Instance spawned successfully.#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.951 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.961 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.967 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.971 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.971 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.971 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.972 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.972 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.972 227364 DEBUG nova.virt.libvirt.driver [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.997 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.997 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405130.9442484, 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:10 np0005539551 nova_compute[227360]: 2025-11-29 08:32:10.997 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.022 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.024 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405130.9481416, 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.025 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.041 227364 INFO nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Took 7.93 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.041 227364 DEBUG nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.042 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.048 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.071 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.109 227364 INFO nova.compute.manager [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Took 8.91 seconds to build instance.#033[00m
Nov 29 03:32:11 np0005539551 nova_compute[227360]: 2025-11-29 08:32:11.126 227364 DEBUG oslo_concurrency.lockutils [None req-a2946922-d6cf-4235-8311-d2015649af2c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:12.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.668 227364 DEBUG nova.compute.manager [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.668 227364 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.668 227364 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.669 227364 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.669 227364 DEBUG nova.compute.manager [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] No waiting events found dispatching network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:12 np0005539551 nova_compute[227360]: 2025-11-29 08:32:12.669 227364 WARNING nova.compute.manager [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received unexpected event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:32:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:13 np0005539551 nova_compute[227360]: 2025-11-29 08:32:13.420 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:13 np0005539551 podman[286068]: 2025-11-29 08:32:13.613268 +0000 UTC m=+0.058068042 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:32:13 np0005539551 podman[286066]: 2025-11-29 08:32:13.635679266 +0000 UTC m=+0.082723319 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:32:13 np0005539551 podman[286067]: 2025-11-29 08:32:13.639934011 +0000 UTC m=+0.087188950 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:32:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:14.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:14 np0005539551 nova_compute[227360]: 2025-11-29 08:32:14.760 227364 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539551 nova_compute[227360]: 2025-11-29 08:32:14.760 227364 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing instance network info cache due to event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:14 np0005539551 nova_compute[227360]: 2025-11-29 08:32:14.761 227364 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:14 np0005539551 nova_compute[227360]: 2025-11-29 08:32:14.761 227364 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:14 np0005539551 nova_compute[227360]: 2025-11-29 08:32:14.761 227364 DEBUG nova.network.neutron [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:14.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:16.075 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:17 np0005539551 nova_compute[227360]: 2025-11-29 08:32:17.160 227364 DEBUG nova.network.neutron [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updated VIF entry in instance network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:17 np0005539551 nova_compute[227360]: 2025-11-29 08:32:17.161 227364 DEBUG nova.network.neutron [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:17 np0005539551 nova_compute[227360]: 2025-11-29 08:32:17.182 227364 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:17 np0005539551 nova_compute[227360]: 2025-11-29 08:32:17.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.176 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.176 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.194 227364 DEBUG nova.objects.instance [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.237 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.421 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.532 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.533 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.533 227364 INFO nova.compute.manager [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Attaching volume 6273512e-203d-43b7-bb2f-a59b9ff4579f to /dev/vdb#033[00m
Nov 29 03:32:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:18Z|00715|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:32:18 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:18Z|00716|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.639 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.700 227364 DEBUG os_brick.utils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.702 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.715 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.715 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[31999389-1137-4c6c-9585-450e1c26e991]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.716 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.725 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.726 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[f99265ad-fcc4-432d-b2e5-8f202dcf4edb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.727 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.737 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.737 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[74fb7688-02fd-4106-b16a-3c8f6719c0f9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.739 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[03ce8242-7c50-4ab5-a248-ed8a04b61e0e]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.739 227364 DEBUG oslo_concurrency.processutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.767 227364 DEBUG oslo_concurrency.processutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.769 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.769 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.769 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.770 227364 DEBUG os_brick.utils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:32:18 np0005539551 nova_compute[227360]: 2025-11-29 08:32:18.770 227364 DEBUG nova.virt.block_device [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating existing volume attachment record: 5cb6545a-cf46-4ce4-a6e9-76f83d9bbc0b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:32:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.519 227364 DEBUG nova.objects.instance [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.548 227364 DEBUG nova.virt.libvirt.driver [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Attempting to attach volume 6273512e-203d-43b7-bb2f-a59b9ff4579f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.551 227364 DEBUG nova.virt.libvirt.guest [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:19 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:32:19 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:19 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:32:19 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:32:19 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.677 227364 DEBUG nova.virt.libvirt.driver [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.677 227364 DEBUG nova.virt.libvirt.driver [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.678 227364 DEBUG nova.virt.libvirt.driver [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.678 227364 DEBUG nova.virt.libvirt.driver [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:fb:e3:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:19.882 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:19.882 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:19.884 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:19 np0005539551 nova_compute[227360]: 2025-11-29 08:32:19.941 227364 DEBUG oslo_concurrency.lockutils [None req-b765cefc-4435-41d1-a74d-6460cec3b1b1 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:21Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:80:cb 10.100.0.12
Nov 29 03:32:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:22.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:22 np0005539551 nova_compute[227360]: 2025-11-29 08:32:22.250 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:23 np0005539551 nova_compute[227360]: 2025-11-29 08:32:23.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:24.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.155 227364 DEBUG oslo_concurrency.lockutils [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.155 227364 DEBUG oslo_concurrency.lockutils [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.170 227364 INFO nova.compute.manager [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Detaching volume 6273512e-203d-43b7-bb2f-a59b9ff4579f#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.378 227364 INFO nova.virt.block_device [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Attempting to driver detach volume 6273512e-203d-43b7-bb2f-a59b9ff4579f from mountpoint /dev/vdb#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.385 227364 DEBUG nova.virt.libvirt.driver [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Attempting to detach device vdb from instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.386 227364 DEBUG nova.virt.libvirt.guest [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:32:24 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.411 227364 INFO nova.virt.libvirt.driver [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 from the persistent domain config.#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.412 227364 DEBUG nova.virt.libvirt.driver [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.412 227364 DEBUG nova.virt.libvirt.guest [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:24 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:32:24 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.766 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405144.7657814, 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.767 227364 DEBUG nova.virt.libvirt.driver [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:32:24 np0005539551 nova_compute[227360]: 2025-11-29 08:32:24.770 227364 INFO nova.virt.libvirt.driver [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 from the live domain config.#033[00m
Nov 29 03:32:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:24.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:25 np0005539551 nova_compute[227360]: 2025-11-29 08:32:25.494 227364 DEBUG nova.objects.instance [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:25 np0005539551 nova_compute[227360]: 2025-11-29 08:32:25.566 227364 DEBUG oslo_concurrency.lockutils [None req-62acafb3-2181-4738-adc5-7af57e50efa6 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:26.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:26Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:e3:fc 10.100.0.4
Nov 29 03:32:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:26Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:e3:fc 10.100.0.4
Nov 29 03:32:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:27 np0005539551 nova_compute[227360]: 2025-11-29 08:32:27.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:28.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:28 np0005539551 nova_compute[227360]: 2025-11-29 08:32:28.497 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:30.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:30 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:32 np0005539551 nova_compute[227360]: 2025-11-29 08:32:32.256 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:32 np0005539551 nova_compute[227360]: 2025-11-29 08:32:32.849 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:32.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:33 np0005539551 nova_compute[227360]: 2025-11-29 08:32:33.499 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:34.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:37 np0005539551 nova_compute[227360]: 2025-11-29 08:32:37.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:37 np0005539551 nova_compute[227360]: 2025-11-29 08:32:37.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:38 np0005539551 nova_compute[227360]: 2025-11-29 08:32:38.501 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:38.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:39 np0005539551 nova_compute[227360]: 2025-11-29 08:32:39.939 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4114749764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:40 np0005539551 nova_compute[227360]: 2025-11-29 08:32:40.870 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:40.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.382 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.382 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.385 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.386 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.389 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.389 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.602 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.603 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3787MB free_disk=20.791507720947266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.604 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.604 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.743 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 7f0919df-6a69-4824-b20e-6540d1d3de30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.744 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b08df6d7-85bd-4c2a-9bd8-f37384b9148a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.744 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.744 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.744 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:32:41 np0005539551 nova_compute[227360]: 2025-11-29 08:32:41.873 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:42.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.262 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/992890894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.378 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.384 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.732 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.779 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:32:42 np0005539551 nova_compute[227360]: 2025-11-29 08:32:42.779 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:42.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:43 np0005539551 nova_compute[227360]: 2025-11-29 08:32:43.503 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:43 np0005539551 nova_compute[227360]: 2025-11-29 08:32:43.780 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:43 np0005539551 nova_compute[227360]: 2025-11-29 08:32:43.780 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.027 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.027 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.028 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:32:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:44.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.140 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.140 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.141 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.141 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.141 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.142 227364 INFO nova.compute.manager [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Terminating instance
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.143 227364 DEBUG nova.compute.manager [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:32:44 np0005539551 kernel: tapb8a243db-cb (unregistering): left promiscuous mode
Nov 29 03:32:44 np0005539551 NetworkManager[48922]: <info>  [1764405164.2858] device (tapb8a243db-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:44Z|00717|binding|INFO|Releasing lport b8a243db-cb27-44c3-9015-4bf4d9a49bac from this chassis (sb_readonly=0)
Nov 29 03:32:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:44Z|00718|binding|INFO|Setting lport b8a243db-cb27-44c3-9015-4bf4d9a49bac down in Southbound
Nov 29 03:32:44 np0005539551 ovn_controller[130266]: 2025-11-29T08:32:44Z|00719|binding|INFO|Removing iface tapb8a243db-cb ovn-installed in OVS
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.301 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.313 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:44 np0005539551 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Nov 29 03:32:44 np0005539551 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009c.scope: Consumed 15.002s CPU time.
Nov 29 03:32:44 np0005539551 systemd-machined[190756]: Machine qemu-75-instance-0000009c terminated.
Nov 29 03:32:44 np0005539551 podman[286385]: 2025-11-29 08:32:44.372508259 +0000 UTC m=+0.061719510 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:32:44 np0005539551 podman[286386]: 2025-11-29 08:32:44.376902378 +0000 UTC m=+0.063091458 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.383 227364 INFO nova.virt.libvirt.driver [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Instance destroyed successfully.#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.384 227364 DEBUG nova.objects.instance [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'resources' on Instance uuid 7f0919df-6a69-4824-b20e-6540d1d3de30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.391 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:80:cb 10.100.0.12'], port_security=['fa:16:3e:ad:80:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7f0919df-6a69-4824-b20e-6540d1d3de30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '81cf2245-74ac-4962-8637-69fd9ed2858e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8a243db-cb27-44c3-9015-4bf4d9a49bac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.392 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8a243db-cb27-44c3-9015-4bf4d9a49bac in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 unbound from our chassis#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.393 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6d35cfb-cc41-4788-977c-b8e5140795a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.394 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6591ff7d-eb46-487d-89a6-73f55ce6484c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.394 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace which is not needed anymore#033[00m
Nov 29 03:32:44 np0005539551 podman[286382]: 2025-11-29 08:32:44.399440517 +0000 UTC m=+0.093201852 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.434 227364 DEBUG nova.virt.libvirt.vif [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-570591477',display_name='tempest-AttachVolumeTestJSON-server-570591477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-570591477',id=156,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPLhAS709HJYCqqSVKtaAsZHN9+aAJmT3RuP2sRo5+42qudvHLEFjfUIRKebI1UviiQdgrbtVipNk1gA+7U8vpFGwsazdqavrjn4FbUCXtlfljCRqwjbJ7fLvby4dRZp3g==',key_name='tempest-keypair-1111859008',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-01op0prm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=7f0919df-6a69-4824-b20e-6540d1d3de30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.434 227364 DEBUG nova.network.os_vif_util [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "address": "fa:16:3e:ad:80:cb", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8a243db-cb", "ovs_interfaceid": "b8a243db-cb27-44c3-9015-4bf4d9a49bac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.435 227364 DEBUG nova.network.os_vif_util [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.435 227364 DEBUG os_vif [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.437 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.437 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8a243db-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.439 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.440 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.440 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.442 227364 INFO os_vif [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:80:cb,bridge_name='br-int',has_traffic_filtering=True,id=b8a243db-cb27-44c3-9015-4bf4d9a49bac,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8a243db-cb')#033[00m
Nov 29 03:32:44 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [NOTICE]   (285881) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:44 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [NOTICE]   (285881) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:44 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [WARNING]  (285881) : Exiting Master process...
Nov 29 03:32:44 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [ALERT]    (285881) : Current worker (285883) exited with code 143 (Terminated)
Nov 29 03:32:44 np0005539551 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[285858]: [WARNING]  (285881) : All workers exited. Exiting... (0)
Nov 29 03:32:44 np0005539551 systemd[1]: libpod-2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae.scope: Deactivated successfully.
Nov 29 03:32:44 np0005539551 podman[286492]: 2025-11-29 08:32:44.523700049 +0000 UTC m=+0.039657823 container died 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:32:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:44 np0005539551 systemd[1]: var-lib-containers-storage-overlay-45d658145cf2b31b1a5df29d096da33c06e3e75b64ba83f516ce57731d1e57ee-merged.mount: Deactivated successfully.
Nov 29 03:32:44 np0005539551 podman[286492]: 2025-11-29 08:32:44.55660346 +0000 UTC m=+0.072561234 container cleanup 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:32:44 np0005539551 systemd[1]: libpod-conmon-2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae.scope: Deactivated successfully.
Nov 29 03:32:44 np0005539551 podman[286523]: 2025-11-29 08:32:44.611548386 +0000 UTC m=+0.037656830 container remove 2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.617 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c3d282-5e7e-47f7-9cac-e2b6bdbf1428]: (4, ('Sat Nov 29 08:32:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae)\n2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae\nSat Nov 29 08:32:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae)\n2d4bf9158a5eebcb24a5250c320f8e0b8f4ab9c93d8202ee21d6a364125b0cae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.618 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6b84e342-282a-4f78-a86f-2f5700c9b953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.619 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.653 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 kernel: tapd6d35cfb-c0: left promiscuous mode
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.656 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.659 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7abcf1-a60d-428d-8c2c-424237ffd554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 nova_compute[227360]: 2025-11-29 08:32:44.670 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.675 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[90249b13-3f82-4d99-8b0c-d683c168acf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.677 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7fba67f2-49fd-419a-80ed-df6864bf9fe7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.693 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[72659501-8667-49c9-bc56-a1564d116b9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817425, 'reachable_time': 25555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286538, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.695 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:44 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:32:44.695 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f1ed54-80c1-466b-b89f-48d8c15721a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:44 np0005539551 systemd[1]: run-netns-ovnmeta\x2dd6d35cfb\x2dcc41\x2d4788\x2d977c\x2db8e5140795a0.mount: Deactivated successfully.
Nov 29 03:32:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.008 227364 DEBUG nova.compute.manager [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.008 227364 DEBUG oslo_concurrency.lockutils [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.008 227364 DEBUG oslo_concurrency.lockutils [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.009 227364 DEBUG oslo_concurrency.lockutils [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.009 227364 DEBUG nova.compute.manager [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.010 227364 DEBUG nova.compute.manager [req-e1b7dd70-563c-4316-8647-2552e0715ad4 req-985efb47-8fa5-4f71-a8a8-0948be4a0ba2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-unplugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:32:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:46.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.308 227364 INFO nova.virt.libvirt.driver [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Deleting instance files /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30_del
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.309 227364 INFO nova.virt.libvirt.driver [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Deletion of /var/lib/nova/instances/7f0919df-6a69-4824-b20e-6540d1d3de30_del complete
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.469 227364 INFO nova.compute.manager [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Took 2.33 seconds to destroy the instance on the hypervisor.
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.470 227364 DEBUG oslo.service.loopingcall [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.470 227364 DEBUG nova.compute.manager [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:32:46 np0005539551 nova_compute[227360]: 2025-11-29 08:32:46.470 227364 DEBUG nova.network.neutron [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:32:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.038 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating instance_info_cache with network_info: [{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.180 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.180 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.181 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.181 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.181 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539551 nova_compute[227360]: 2025-11-29 08:32:47.264 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.225 227364 DEBUG nova.compute.manager [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.226 227364 DEBUG oslo_concurrency.lockutils [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.226 227364 DEBUG oslo_concurrency.lockutils [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.226 227364 DEBUG oslo_concurrency.lockutils [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.227 227364 DEBUG nova.compute.manager [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] No waiting events found dispatching network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.227 227364 WARNING nova.compute.manager [req-65fbf022-d82b-4088-88f8-24e87e076319 req-32ad2fc0-3484-4797-807a-a905a3188e57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received unexpected event network-vif-plugged-b8a243db-cb27-44c3-9015-4bf4d9a49bac for instance with vm_state active and task_state deleting.
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.375 227364 DEBUG nova.network.neutron [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.519 227364 INFO nova.compute.manager [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Took 2.05 seconds to deallocate network for instance.
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.638 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.639 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.766 227364 DEBUG nova.compute.manager [req-675965b4-cbb6-4470-8e96-d2106cb28cb0 req-c7889e81-2196-4c21-92f1-b73b0b41811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Received event network-vif-deleted-b8a243db-cb27-44c3-9015-4bf4d9a49bac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:32:48 np0005539551 nova_compute[227360]: 2025-11-29 08:32:48.826 227364 DEBUG oslo_concurrency.processutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2927857886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.301 227364 DEBUG oslo_concurrency.processutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.308 227364 DEBUG nova.compute.provider_tree [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.329 227364 DEBUG nova.scheduler.client.report [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.387 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.441 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.458 227364 INFO nova.scheduler.client.report [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Deleted allocations for instance 7f0919df-6a69-4824-b20e-6540d1d3de30
Nov 29 03:32:49 np0005539551 nova_compute[227360]: 2025-11-29 08:32:49.529 227364 DEBUG oslo_concurrency.lockutils [None req-5298ccfe-d318-4b46-bc06-0ba63fb21449 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "7f0919df-6a69-4824-b20e-6540d1d3de30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:51 np0005539551 nova_compute[227360]: 2025-11-29 08:32:51.301 227364 DEBUG nova.compute.manager [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:32:51 np0005539551 nova_compute[227360]: 2025-11-29 08:32:51.302 227364 DEBUG nova.compute.manager [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing instance network info cache due to event network-changed-c4bf30df-8d4f-4601-a7a6-1d851938fab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:32:51 np0005539551 nova_compute[227360]: 2025-11-29 08:32:51.302 227364 DEBUG oslo_concurrency.lockutils [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:51 np0005539551 nova_compute[227360]: 2025-11-29 08:32:51.302 227364 DEBUG oslo_concurrency.lockutils [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:51 np0005539551 nova_compute[227360]: 2025-11-29 08:32:51.303 227364 DEBUG nova.network.neutron [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Refreshing network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:32:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:52 np0005539551 nova_compute[227360]: 2025-11-29 08:32:52.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539551 nova_compute[227360]: 2025-11-29 08:32:53.659 227364 DEBUG nova.network.neutron [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updated VIF entry in instance network info cache for port c4bf30df-8d4f-4601-a7a6-1d851938fab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:32:53 np0005539551 nova_compute[227360]: 2025-11-29 08:32:53.660 227364 DEBUG nova.network.neutron [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:53 np0005539551 nova_compute[227360]: 2025-11-29 08:32:53.694 227364 DEBUG oslo_concurrency.lockutils [req-a1dbcfe8-cf28-4804-87da-e79e4c458ade req-3c3e16b8-3edf-4f0c-9cb7-2b2fc72cdfe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:54 np0005539551 nova_compute[227360]: 2025-11-29 08:32:54.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:54 np0005539551 nova_compute[227360]: 2025-11-29 08:32:54.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:32:54 np0005539551 nova_compute[227360]: 2025-11-29 08:32:54.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.632 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.633 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.673 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.767 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.767 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.774 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.775 227364 INFO nova.compute.claims [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:32:55 np0005539551 nova_compute[227360]: 2025-11-29 08:32:55.960 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898115885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.484 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.489 227364 DEBUG nova.compute.provider_tree [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.504 227364 DEBUG nova.scheduler.client.report [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.529 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.530 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.577 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.578 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.610 227364 INFO nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.641 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.850 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.852 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.853 227364 INFO nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Creating image(s)
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.885 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.910 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.934 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.937 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:56 np0005539551 nova_compute[227360]: 2025-11-29 08:32:56.989 227364 DEBUG nova.policy [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.001 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.003 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.003 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.004 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.033 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.037 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.269 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:57 np0005539551 nova_compute[227360]: 2025-11-29 08:32:57.995 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.958s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.069 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] resizing rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:32:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.178 227364 DEBUG nova.objects.instance [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.213 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.214 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Ensure instance console log exists: /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.214 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.214 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.215 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:58 np0005539551 nova_compute[227360]: 2025-11-29 08:32:58.457 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Successfully created port: a08044a1-40b5-4987-bfe0-a92ba0c13b97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:32:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:32:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.381 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405164.3802404, 7f0919df-6a69-4824-b20e-6540d1d3de30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.381 227364 INFO nova.compute.manager [-] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] VM Stopped (Lifecycle Event)
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.411 227364 DEBUG nova.compute.manager [None req-db1a3edc-5f87-434d-8146-e0a8216a50d6 - - - - - -] [instance: 7f0919df-6a69-4824-b20e-6540d1d3de30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.446 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.677 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Successfully updated port: a08044a1-40b5-4987-bfe0-a92ba0c13b97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.706 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.706 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.706 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.820 227364 DEBUG nova.compute.manager [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.820 227364 DEBUG nova.compute.manager [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.820 227364 DEBUG oslo_concurrency.lockutils [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:59 np0005539551 nova_compute[227360]: 2025-11-29 08:32:59.949 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:33:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:00.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.290 227364 DEBUG nova.network.neutron [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.322 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.323 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance network_info: |[{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.323 227364 DEBUG oslo_concurrency.lockutils [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.324 227364 DEBUG nova.network.neutron [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.326 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start _get_guest_xml network_info=[{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.333 227364 WARNING nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.340 227364 DEBUG nova.virt.libvirt.host [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.341 227364 DEBUG nova.virt.libvirt.host [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.344 227364 DEBUG nova.virt.libvirt.host [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.345 227364 DEBUG nova.virt.libvirt.host [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.346 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.346 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.347 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.347 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.347 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.347 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.348 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.348 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.348 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.348 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.349 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.349 227364 DEBUG nova.virt.hardware [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.352 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4120132629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:01 np0005539551 nova_compute[227360]: 2025-11-29 08:33:01.993 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.018 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.022 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.270 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4144383869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.441 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.443 227364 DEBUG nova.virt.libvirt.vif [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',net
work_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.443 227364 DEBUG nova.network.os_vif_util [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.444 227364 DEBUG nova.network.os_vif_util [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.445 227364 DEBUG nova.objects.instance [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.464 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <uuid>f6eba09a-c0cf-4855-afd5-b265b2f2cadc</uuid>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <name>instance-000000a6</name>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:name>multiattach-server-1</nova:name>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:33:01</nova:creationTime>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <nova:port uuid="a08044a1-40b5-4987-bfe0-a92ba0c13b97">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="serial">f6eba09a-c0cf-4855-afd5-b265b2f2cadc</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="uuid">f6eba09a-c0cf-4855-afd5-b265b2f2cadc</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:39:c1:ec"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <target dev="tapa08044a1-40"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/console.log" append="off"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:33:02 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:33:02 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:33:02 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:33:02 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.465 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Preparing to wait for external event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.466 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.466 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.467 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.468 227364 DEBUG nova.virt.libvirt.vif [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_r
am='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.468 227364 DEBUG nova.network.os_vif_util [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.469 227364 DEBUG nova.network.os_vif_util [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.469 227364 DEBUG os_vif [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.470 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.471 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.471 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.474 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.474 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa08044a1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.475 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa08044a1-40, col_values=(('external_ids', {'iface-id': 'a08044a1-40b5-4987-bfe0-a92ba0c13b97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:c1:ec', 'vm-uuid': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:02 np0005539551 NetworkManager[48922]: <info>  [1764405182.4775] manager: (tapa08044a1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.478 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.485 227364 INFO os_vif [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40')#033[00m
Nov 29 03:33:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.992 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.992 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.992 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:39:c1:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:02 np0005539551 nova_compute[227360]: 2025-11-29 08:33:02.993 227364 INFO nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Using config drive#033[00m
Nov 29 03:33:03 np0005539551 nova_compute[227360]: 2025-11-29 08:33:03.017 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:03 np0005539551 nova_compute[227360]: 2025-11-29 08:33:03.936 227364 INFO nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Creating config drive at /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config#033[00m
Nov 29 03:33:03 np0005539551 nova_compute[227360]: 2025-11-29 08:33:03.943 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_umzt5ax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.010 227364 DEBUG nova.network.neutron [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.011 227364 DEBUG nova.network.neutron [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.034 227364 DEBUG oslo_concurrency.lockutils [req-1fb725c4-45f6-4f09-8947-fb23a928b630 req-f67ebabe-e7e4-47c3-94b7-e89a71702b34 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.078 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_umzt5ax" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.106 227364 DEBUG nova.storage.rbd_utils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.109 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.256 227364 DEBUG oslo_concurrency.processutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.257 227364 INFO nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Deleting local config drive /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:33:04 np0005539551 kernel: tapa08044a1-40: entered promiscuous mode
Nov 29 03:33:04 np0005539551 NetworkManager[48922]: <info>  [1764405184.3048] manager: (tapa08044a1-40): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.305 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:04Z|00720|binding|INFO|Claiming lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 for this chassis.
Nov 29 03:33:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:04Z|00721|binding|INFO|a08044a1-40b5-4987-bfe0-a92ba0c13b97: Claiming fa:16:3e:39:c1:ec 10.100.0.3
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.312 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c1:ec 10.100.0.3'], port_security=['fa:16:3e:39:c1:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=a08044a1-40b5-4987-bfe0-a92ba0c13b97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.313 139482 INFO neutron.agent.ovn.metadata.agent [-] Port a08044a1-40b5-4987-bfe0-a92ba0c13b97 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.315 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:33:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:04Z|00722|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 ovn-installed in OVS
Nov 29 03:33:04 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:04Z|00723|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 up in Southbound
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.322 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.324 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.332 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[37460d90-6d2e-4a6e-83df-8c9f064abaeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 systemd-udevd[286884]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:04 np0005539551 NetworkManager[48922]: <info>  [1764405184.3466] device (tapa08044a1-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:04 np0005539551 NetworkManager[48922]: <info>  [1764405184.3475] device (tapa08044a1-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:04 np0005539551 systemd-machined[190756]: New machine qemu-77-instance-000000a6.
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.361 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2ec0ce-236a-45d0-818c-8f70938ccb01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.365 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[183c4c9c-fae9-4c36-8c9b-468356777a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 systemd[1]: Started Virtual Machine qemu-77-instance-000000a6.
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.388 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[979f436a-f428-4bef-b84b-472860e67e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.403 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6d40fa51-67bf-4905-a447-7a1297ca1040]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286895, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.415 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d54769a7-9344-4e96-9084-9e3b25a03054]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814444, 'tstamp': 814444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286900, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814447, 'tstamp': 814447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286900, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.417 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.420 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:04.421 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.995 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405184.994675, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:04 np0005539551 nova_compute[227360]: 2025-11-29 08:33:04.995 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.022 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.025 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405184.9948833, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.026 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.045 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.048 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:05 np0005539551 nova_compute[227360]: 2025-11-29 08:33:05.087 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.416 227364 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.417 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.417 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.417 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Processing event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.418 227364 WARNING nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.419 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.423 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405186.423038, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.423 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.425 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.429 227364 INFO nova.virt.libvirt.driver [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance spawned successfully.#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.429 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.454 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.459 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.463 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.464 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.464 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.464 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.465 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.465 227364 DEBUG nova.virt.libvirt.driver [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.490 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.526 227364 INFO nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Took 9.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.526 227364 DEBUG nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.594 227364 INFO nova.compute.manager [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Took 10.86 seconds to build instance.#033[00m
Nov 29 03:33:06 np0005539551 nova_compute[227360]: 2025-11-29 08:33:06.622 227364 DEBUG oslo_concurrency.lockutils [None req-2f058088-ab7d-4c98-beff-431ffa7777cc c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:07 np0005539551 nova_compute[227360]: 2025-11-29 08:33:07.272 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:07 np0005539551 nova_compute[227360]: 2025-11-29 08:33:07.477 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:10.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:11.612 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:11.613 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.969 227364 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.970 227364 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.970 227364 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.970 227364 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:11 np0005539551 nova_compute[227360]: 2025-11-29 08:33:11.971 227364 DEBUG nova.network.neutron [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:12.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:12 np0005539551 nova_compute[227360]: 2025-11-29 08:33:12.274 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:12 np0005539551 nova_compute[227360]: 2025-11-29 08:33:12.479 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:13 np0005539551 nova_compute[227360]: 2025-11-29 08:33:13.877 227364 DEBUG nova.network.neutron [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:13 np0005539551 nova_compute[227360]: 2025-11-29 08:33:13.878 227364 DEBUG nova.network.neutron [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:13 np0005539551 nova_compute[227360]: 2025-11-29 08:33:13.905 227364 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:14 np0005539551 podman[286946]: 2025-11-29 08:33:14.60322953 +0000 UTC m=+0.056258002 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:33:14 np0005539551 podman[286945]: 2025-11-29 08:33:14.613411416 +0000 UTC m=+0.067022034 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:14.614 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:14 np0005539551 podman[286944]: 2025-11-29 08:33:14.637344963 +0000 UTC m=+0.086224513 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:33:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.019 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.019 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.036 227364 DEBUG nova.objects.instance [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.075 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.310 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.310 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.310 227364 INFO nova.compute.manager [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Attaching volume d51465d5-c782-4ab5-86e5-16500d7ed93e to /dev/vdb#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.481 227364 DEBUG os_brick.utils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.483 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.496 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.497 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[5708c4b7-1a7b-4caf-be4d-3354eadcd048]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.498 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.512 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.512 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[67596c51-a939-43e7-89c0-a0abf94ec9f5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.514 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.525 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.525 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[52dfa537-ecf2-442b-b45e-8382eaf74f7a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.527 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[4b03575d-61d2-479a-9995-7c683fddf31e]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.527 227364 DEBUG oslo_concurrency.processutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.556 227364 DEBUG oslo_concurrency.processutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.558 227364 DEBUG os_brick.initiator.connectors.lightos [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.558 227364 DEBUG os_brick.initiator.connectors.lightos [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.559 227364 DEBUG os_brick.initiator.connectors.lightos [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.559 227364 DEBUG os_brick.utils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:33:16 np0005539551 nova_compute[227360]: 2025-11-29 08:33:16.559 227364 DEBUG nova.virt.block_device [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating existing volume attachment record: d3106461-3305-4ffc-b68f-b5055f937fdf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:33:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.275 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.340 227364 DEBUG nova.objects.instance [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.388 227364 DEBUG nova.virt.libvirt.driver [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Attempting to attach volume d51465d5-c782-4ab5-86e5-16500d7ed93e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.391 227364 DEBUG nova.virt.libvirt.guest [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:33:17 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:33:17 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:33:17 np0005539551 nova_compute[227360]:  <shareable/>
Nov 29 03:33:17 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:33:17 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.481 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.656 227364 DEBUG nova.virt.libvirt.driver [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.657 227364 DEBUG nova.virt.libvirt.driver [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.657 227364 DEBUG nova.virt.libvirt.driver [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.657 227364 DEBUG nova.virt.libvirt.driver [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:39:c1:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:17 np0005539551 nova_compute[227360]: 2025-11-29 08:33:17.898 227364 DEBUG oslo_concurrency.lockutils [None req-735b3e58-fadd-4d78-8e5e-bada282fcf60 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:18.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:19.882 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:19.883 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:33:19.883 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:20.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:20.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:22Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:c1:ec 10.100.0.3
Nov 29 03:33:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:33:22Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:c1:ec 10.100.0.3
Nov 29 03:33:22 np0005539551 nova_compute[227360]: 2025-11-29 08:33:22.323 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:22 np0005539551 nova_compute[227360]: 2025-11-29 08:33:22.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:22.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:26.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:27 np0005539551 nova_compute[227360]: 2025-11-29 08:33:27.324 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:27 np0005539551 nova_compute[227360]: 2025-11-29 08:33:27.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:28.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:30.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:32.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:32 np0005539551 nova_compute[227360]: 2025-11-29 08:33:32.327 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:32 np0005539551 nova_compute[227360]: 2025-11-29 08:33:32.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:32 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:32.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:34.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1242229061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:36.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:37 np0005539551 nova_compute[227360]: 2025-11-29 08:33:37.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:37 np0005539551 nova_compute[227360]: 2025-11-29 08:33:37.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 03:33:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:38.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:39 np0005539551 nova_compute[227360]: 2025-11-29 08:33:39.434 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:40.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:41 np0005539551 nova_compute[227360]: 2025-11-29 08:33:41.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.373 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.486 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.486 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.487 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.487 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.488 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.528 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/47587197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:42 np0005539551 nova_compute[227360]: 2025-11-29 08:33:42.963 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.058 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.058 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.061 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.062 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.065 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.065 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.065 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.216 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.217 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3716MB free_disk=20.693748474121094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.218 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.218 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.321 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b08df6d7-85bd-4c2a-9bd8-f37384b9148a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.321 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.322 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.322 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.322 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.441 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2519330172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.860 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.865 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.888 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.924 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:33:43 np0005539551 nova_compute[227360]: 2025-11-29 08:33:43.925 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:44.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:45 np0005539551 podman[287263]: 2025-11-29 08:33:45.599018637 +0000 UTC m=+0.053584960 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:33:45 np0005539551 podman[287262]: 2025-11-29 08:33:45.625236516 +0000 UTC m=+0.071715191 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:33:45 np0005539551 podman[287261]: 2025-11-29 08:33:45.631171246 +0000 UTC m=+0.089892812 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:33:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:46.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:46 np0005539551 nova_compute[227360]: 2025-11-29 08:33:46.925 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:46 np0005539551 nova_compute[227360]: 2025-11-29 08:33:46.926 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:33:46 np0005539551 nova_compute[227360]: 2025-11-29 08:33:46.926 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:33:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.374 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.531 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.578 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.578 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.578 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:33:47 np0005539551 nova_compute[227360]: 2025-11-29 08:33:47.578 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid b08df6d7-85bd-4c2a-9bd8-f37384b9148a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:49 np0005539551 nova_compute[227360]: 2025-11-29 08:33:49.963 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating instance_info_cache with network_info: [{"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:49 np0005539551 nova_compute[227360]: 2025-11-29 08:33:49.983 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-b08df6d7-85bd-4c2a-9bd8-f37384b9148a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:49 np0005539551 nova_compute[227360]: 2025-11-29 08:33:49.983 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:33:49 np0005539551 nova_compute[227360]: 2025-11-29 08:33:49.983 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:49 np0005539551 nova_compute[227360]: 2025-11-29 08:33:49.984 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:50.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:52 np0005539551 nova_compute[227360]: 2025-11-29 08:33:52.376 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539551 nova_compute[227360]: 2025-11-29 08:33:52.532 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:53 np0005539551 nova_compute[227360]: 2025-11-29 08:33:53.177 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:55 np0005539551 nova_compute[227360]: 2025-11-29 08:33:55.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:55 np0005539551 nova_compute[227360]: 2025-11-29 08:33:55.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:33:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:56.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:57.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:57 np0005539551 nova_compute[227360]: 2025-11-29 08:33:57.378 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:57 np0005539551 nova_compute[227360]: 2025-11-29 08:33:57.533 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900839083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:58.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.336 227364 DEBUG nova.compute.manager [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.336 227364 DEBUG nova.compute.manager [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.336 227364 DEBUG oslo_concurrency.lockutils [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.337 227364 DEBUG oslo_concurrency.lockutils [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.337 227364 DEBUG nova.network.neutron [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:58 np0005539551 nova_compute[227360]: 2025-11-29 08:33:58.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:33:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 03:34:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:00.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:00 np0005539551 nova_compute[227360]: 2025-11-29 08:34:00.287 227364 DEBUG nova.network.neutron [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:00 np0005539551 nova_compute[227360]: 2025-11-29 08:34:00.287 227364 DEBUG nova.network.neutron [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:00 np0005539551 nova_compute[227360]: 2025-11-29 08:34:00.325 227364 DEBUG oslo_concurrency.lockutils [req-c011b50a-a9bf-41f6-9400-04c1fc29cbec req-4911a7c8-f9d9-4579-b67c-f9be2d665ab5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:01 np0005539551 nova_compute[227360]: 2025-11-29 08:34:01.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:02.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.381 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.535 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.661 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.661 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.744 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.836 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.837 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.846 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:02 np0005539551 nova_compute[227360]: 2025-11-29 08:34:02.846 227364 INFO nova.compute.claims [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:34:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.034 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832778084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.461 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.467 227364 DEBUG nova.compute.provider_tree [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.496 227364 DEBUG nova.scheduler.client.report [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.528 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.528 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.595 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.596 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.630 227364 INFO nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.652 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.803 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.804 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.805 227364 INFO nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating image(s)#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.831 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.857 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.883 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.886 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.963 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.964 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.965 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.965 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.993 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:03 np0005539551 nova_compute[227360]: 2025-11-29 08:34:03.997 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.033 227364 DEBUG nova.policy [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a57807acb02b45d082f242ec62cd5b6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:34:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.306 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.362 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] resizing rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.460 227364 DEBUG nova.objects.instance [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'migration_context' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.477 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.478 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Ensure instance console log exists: /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.478 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.478 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.479 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.789 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Successfully created port: 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.984 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.985 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:04 np0005539551 nova_compute[227360]: 2025-11-29 08:34:04.985 227364 DEBUG nova.network.neutron [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:06.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:06 np0005539551 nova_compute[227360]: 2025-11-29 08:34:06.790 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Successfully updated port: 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:34:06 np0005539551 nova_compute[227360]: 2025-11-29 08:34:06.827 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:06 np0005539551 nova_compute[227360]: 2025-11-29 08:34:06.827 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:06 np0005539551 nova_compute[227360]: 2025-11-29 08:34:06.827 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:07 np0005539551 nova_compute[227360]: 2025-11-29 08:34:07.384 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:07 np0005539551 nova_compute[227360]: 2025-11-29 08:34:07.537 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:07 np0005539551 nova_compute[227360]: 2025-11-29 08:34:07.989 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.034 227364 DEBUG nova.network.neutron [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.055 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.075 227364 DEBUG nova.compute.manager [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.075 227364 DEBUG nova.compute.manager [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing instance network info cache due to event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.075 227364 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.137 227364 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.138 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Creating file /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/b19dde08b326469bb63d8a2ca531390e.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.138 227364 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/b19dde08b326469bb63d8a2ca531390e.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.584 227364 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/b19dde08b326469bb63d8a2ca531390e.tmp" returned: 1 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.585 227364 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/b19dde08b326469bb63d8a2ca531390e.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.585 227364 DEBUG nova.virt.libvirt.volume.remotefs [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Creating directory /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.585 227364 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.791 227364 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:08 np0005539551 nova_compute[227360]: 2025-11-29 08:34:08.800 227364 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:34:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:09.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.037 227364 DEBUG nova.network.neutron [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.052 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.052 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance network_info: |[{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.053 227364 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.053 227364 DEBUG nova.network.neutron [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.055 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start _get_guest_xml network_info=[{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.060 227364 WARNING nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.069 227364 DEBUG nova.virt.libvirt.host [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.070 227364 DEBUG nova.virt.libvirt.host [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.083 227364 DEBUG nova.virt.libvirt.host [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.083 227364 DEBUG nova.virt.libvirt.host [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.085 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.085 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.085 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.085 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.086 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.086 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.086 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.086 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.087 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.087 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.087 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.087 227364 DEBUG nova.virt.hardware [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.090 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579606961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.526 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.590 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:09 np0005539551 nova_compute[227360]: 2025-11-29 08:34:09.594 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3064315281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.031 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.034 227364 DEBUG nova.virt.libvirt.vif [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNeg
ativeTestJSON-1016750887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:03Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.035 227364 DEBUG nova.network.os_vif_util [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.037 227364 DEBUG nova.network.os_vif_util [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.039 227364 DEBUG nova.objects.instance [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_devices' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.062 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <uuid>aaaa6a6b-0c21-483c-b891-02ebe64e6aab</uuid>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <name>instance-000000aa</name>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersNegativeTestJSON-server-1484770108</nova:name>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:34:09</nova:creationTime>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:user uuid="a57807acb02b45d082f242ec62cd5b6f">tempest-ServersNegativeTestJSON-1016750887-project-member</nova:user>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:project uuid="96e72e7660da497a8b6bf9fdb03fe84c">tempest-ServersNegativeTestJSON-1016750887</nova:project>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <nova:port uuid="2c7cb162-70f0-496f-a2bf-0b0af61bd4b2">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="serial">aaaa6a6b-0c21-483c-b891-02ebe64e6aab</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="uuid">aaaa6a6b-0c21-483c-b891-02ebe64e6aab</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:c5:09:19"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <target dev="tap2c7cb162-70"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/console.log" append="off"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:34:10 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:34:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:34:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:34:10 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.064 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Preparing to wait for external event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.064 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.065 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.065 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.066 227364 DEBUG nova.virt.libvirt.vif [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-
ServersNegativeTestJSON-1016750887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:03Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.066 227364 DEBUG nova.network.os_vif_util [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.067 227364 DEBUG nova.network.os_vif_util [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.067 227364 DEBUG os_vif [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.067 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.068 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.068 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.072 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c7cb162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.072 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c7cb162-70, col_values=(('external_ids', {'iface-id': '2c7cb162-70f0-496f-a2bf-0b0af61bd4b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:09:19', 'vm-uuid': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.074 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:10 np0005539551 NetworkManager[48922]: <info>  [1764405250.0760] manager: (tap2c7cb162-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.084 227364 INFO os_vif [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70')#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.137 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.137 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.137 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No VIF found with MAC fa:16:3e:c5:09:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.138 227364 INFO nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Using config drive#033[00m
Nov 29 03:34:10 np0005539551 nova_compute[227360]: 2025-11-29 08:34:10.168 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.279 227364 INFO nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating config drive at /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.285 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8oxz_obw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:11 np0005539551 kernel: tapa08044a1-40 (unregistering): left promiscuous mode
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.3278] device (tapa08044a1-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00724|binding|INFO|Releasing lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 from this chassis (sb_readonly=0)
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00725|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 down in Southbound
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.339 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00726|binding|INFO|Removing iface tapa08044a1-40 ovn-installed in OVS
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.350 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c1:ec 10.100.0.3'], port_security=['fa:16:3e:39:c1:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=a08044a1-40b5-4987-bfe0-a92ba0c13b97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.351 139482 INFO neutron.agent.ovn.metadata.agent [-] Port a08044a1-40b5-4987-bfe0-a92ba0c13b97 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.352 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.357 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.370 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[52fc3c1a-e656-47d0-a52d-2dd92e113fb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.406 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8c99791c-7bd9-40bc-98c8-827bc9f1c744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a6.scope: Consumed 16.531s CPU time.
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.410 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[292e4aa6-fe1a-4c1f-bf40-c94013e8c026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 systemd-machined[190756]: Machine qemu-77-instance-000000a6 terminated.
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.432 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8oxz_obw" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.437 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[002b1d6f-7779-44f3-8653-a1e75f0f0d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.458 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[102b8f5a-d546-4954-a20e-3f25826cb99f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 15775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287608, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.475 227364 DEBUG nova.storage.rbd_utils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.475 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[179db534-a467-45cd-a8a9-69c1880f2312]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814444, 'tstamp': 814444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287624, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814447, 'tstamp': 814447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287624, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.477 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.479 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.487 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.487 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.488 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.488 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.505 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.635 227364 DEBUG oslo_concurrency.processutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.635 227364 INFO nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deleting local config drive /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config because it was imported into RBD.#033[00m
Nov 29 03:34:11 np0005539551 systemd-udevd[287600]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.6849] manager: (tap2c7cb162-70): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 03:34:11 np0005539551 kernel: tap2c7cb162-70: entered promiscuous mode
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00727|binding|INFO|Claiming lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for this chassis.
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00728|binding|INFO|2c7cb162-70f0-496f-a2bf-0b0af61bd4b2: Claiming fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.6964] device (tap2c7cb162-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.6970] device (tap2c7cb162-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00729|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 ovn-installed in OVS
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.712 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:11Z|00730|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 up in Southbound
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.723 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.725 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 bound to our chassis#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.726 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a83b79bc-6262-43e7-a9e5-5e808a213726#033[00m
Nov 29 03:34:11 np0005539551 systemd-machined[190756]: New machine qemu-78-instance-000000aa.
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.736 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[95b479ea-506d-47c1-8da1-7fa86079d2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.736 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa83b79bc-61 in ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.737 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa83b79bc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.738 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[771eb378-a31d-4172-ad80-6aaf621def71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 systemd[1]: Started Virtual Machine qemu-78-instance-000000aa.
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.738 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[990c1ad5-98e6-4205-8f45-1814789e6f88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.747 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e75b02-a718-4a20-b572-a63948a89882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.771 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7f06c770-d0b4-499a-9f0a-0d3a8d5c5527]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.793 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[85bf7f45-7105-42d7-b2cc-d8f002213169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.7997] manager: (tapa83b79bc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.799 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1356863f-3bd6-4241-9d6e-50c3371b7097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.818 227364 INFO nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.825 227364 INFO nova.virt.libvirt.driver [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance destroyed successfully.#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.826 227364 DEBUG nova.virt.libvirt.vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.826 227364 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.827 227364 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.828 227364 DEBUG os_vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.827 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[45890295-eefb-4412-ac94-c51dc175da36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.831 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa08044a1-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.832 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1094248c-2e97-44ba-ba72-b366038b9336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.876 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.879 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:11 np0005539551 nova_compute[227360]: 2025-11-29 08:34:11.881 227364 INFO os_vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40')#033[00m
Nov 29 03:34:11 np0005539551 NetworkManager[48922]: <info>  [1764405251.8921] device (tapa83b79bc-60): carrier: link connected
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.897 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1db810-d5ff-400e-80d4-17fb41b723a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.910 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[93c44ee8-cab1-449f-a4bc-62ebb21a3706]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829877, 'reachable_time': 31512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287710, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.922 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19ba0d6b-3b0e-489a-aead-59e5db64b038]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:4191'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829877, 'tstamp': 829877}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287711, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.934 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd29979-e59e-46c8-b7f6-36e230f85859]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829877, 'reachable_time': 31512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287712, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:11.961 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b515447e-ae96-4ca4-8105-d8188e908a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.008 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.010 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.012 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a34d3f-b3ee-4b5b-b69d-7c3ad451abb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.013 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.013 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.014 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa83b79bc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.015 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 NetworkManager[48922]: <info>  [1764405252.0160] manager: (tapa83b79bc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 03:34:12 np0005539551 kernel: tapa83b79bc-60: entered promiscuous mode
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.017 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.018 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa83b79bc-60, col_values=(('external_ids', {'iface-id': '5551fa67-e815-437e-8413-5562ca9c4d10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.019 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:12Z|00731|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=1)
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.033 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.034 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.034 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[defdb003-0b6a-429c-8ac8-56b8f28b96dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.035 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.035 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'env', 'PROCESS_TAG=haproxy-a83b79bc-6262-43e7-a9e5-5e808a213726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a83b79bc-6262-43e7-a9e5-5e808a213726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.159 227364 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.160 227364 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.160 227364 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.236 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405252.2358677, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.236 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Started (Lifecycle Event)#033[00m
Nov 29 03:34:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.273 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.276 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405252.2359555, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.276 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.314 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.317 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.326 227364 DEBUG nova.compute.manager [req-52dfb4b3-4cd1-4ff0-a9b8-ce89b2b049f7 req-b67b8015-140b-4d40-8145-7d227b2ca12c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.326 227364 DEBUG oslo_concurrency.lockutils [req-52dfb4b3-4cd1-4ff0-a9b8-ce89b2b049f7 req-b67b8015-140b-4d40-8145-7d227b2ca12c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.327 227364 DEBUG oslo_concurrency.lockutils [req-52dfb4b3-4cd1-4ff0-a9b8-ce89b2b049f7 req-b67b8015-140b-4d40-8145-7d227b2ca12c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.327 227364 DEBUG oslo_concurrency.lockutils [req-52dfb4b3-4cd1-4ff0-a9b8-ce89b2b049f7 req-b67b8015-140b-4d40-8145-7d227b2ca12c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.327 227364 DEBUG nova.compute.manager [req-52dfb4b3-4cd1-4ff0-a9b8-ce89b2b049f7 req-b67b8015-140b-4d40-8145-7d227b2ca12c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Processing event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.328 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.329 227364 DEBUG nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.330 227364 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.330 227364 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.330 227364 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.331 227364 DEBUG nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.331 227364 WARNING nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.334 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.336 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance spawned successfully.#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.336 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.351 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.351 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405252.3329098, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.351 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.356 227364 DEBUG nova.network.neutron [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updated VIF entry in instance network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.356 227364 DEBUG nova.network.neutron [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.360 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.360 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.361 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.361 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.361 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.362 227364 DEBUG nova.virt.libvirt.driver [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:12 np0005539551 podman[287784]: 2025-11-29 08:34:12.373148429 +0000 UTC m=+0.049138829 container create 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.402 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.403 227364 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.405 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:12 np0005539551 systemd[1]: Started libpod-conmon-3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f.scope.
Nov 29 03:34:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:34:12 np0005539551 podman[287784]: 2025-11-29 08:34:12.347818984 +0000 UTC m=+0.023809414 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3726ef2f489ba611f97c706ab774be9f506e625d35c308e3d7642e58b5c9410/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.459 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:12 np0005539551 podman[287784]: 2025-11-29 08:34:12.461946362 +0000 UTC m=+0.137936782 container init 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:34:12 np0005539551 podman[287784]: 2025-11-29 08:34:12.468060557 +0000 UTC m=+0.144050967 container start 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:34:12 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [NOTICE]   (287803) : New worker (287805) forked
Nov 29 03:34:12 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [NOTICE]   (287803) : Loading success.
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.496 227364 INFO nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Took 8.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.496 227364 DEBUG nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:12.520 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.655 227364 INFO nova.compute.manager [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Took 9.84 seconds to build instance.#033[00m
Nov 29 03:34:12 np0005539551 nova_compute[227360]: 2025-11-29 08:34:12.693 227364 DEBUG oslo_concurrency.lockutils [None req-5dde6a74-7491-4780-95ce-4271f739bc07 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:13 np0005539551 nova_compute[227360]: 2025-11-29 08:34:13.119 227364 DEBUG neutronclient.v2_0.client [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:34:13 np0005539551 nova_compute[227360]: 2025-11-29 08:34:13.223 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:13 np0005539551 nova_compute[227360]: 2025-11-29 08:34:13.223 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:13 np0005539551 nova_compute[227360]: 2025-11-29 08:34:13.223 227364 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:34:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981405487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:34:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:34:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981405487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:34:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.583 227364 DEBUG nova.compute.manager [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.583 227364 DEBUG oslo_concurrency.lockutils [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.583 227364 DEBUG oslo_concurrency.lockutils [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.584 227364 DEBUG oslo_concurrency.lockutils [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.584 227364 DEBUG nova.compute.manager [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.584 227364 WARNING nova.compute.manager [req-c4de30c7-a4c9-4b41-900f-a04a69e46873 req-f6319d72-7729-4f99-80f2-8bc79bc5c1dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.586 227364 DEBUG nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.586 227364 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.586 227364 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.586 227364 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.586 227364 DEBUG nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:14 np0005539551 nova_compute[227360]: 2025-11-29 08:34:14.587 227364 WARNING nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:34:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:15.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:15 np0005539551 nova_compute[227360]: 2025-11-29 08:34:15.181 227364 DEBUG nova.compute.manager [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:15 np0005539551 nova_compute[227360]: 2025-11-29 08:34:15.182 227364 DEBUG nova.compute.manager [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:15 np0005539551 nova_compute[227360]: 2025-11-29 08:34:15.183 227364 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:15 np0005539551 nova_compute[227360]: 2025-11-29 08:34:15.183 227364 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:15 np0005539551 nova_compute[227360]: 2025-11-29 08:34:15.183 227364 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:16 np0005539551 podman[287817]: 2025-11-29 08:34:16.599110222 +0000 UTC m=+0.046376516 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:16 np0005539551 podman[287816]: 2025-11-29 08:34:16.609587745 +0000 UTC m=+0.058050551 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:16 np0005539551 podman[287815]: 2025-11-29 08:34:16.660192834 +0000 UTC m=+0.108669860 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 03:34:16 np0005539551 nova_compute[227360]: 2025-11-29 08:34:16.875 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:17.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:17 np0005539551 nova_compute[227360]: 2025-11-29 08:34:17.389 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:17 np0005539551 nova_compute[227360]: 2025-11-29 08:34:17.394 227364 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:17 np0005539551 nova_compute[227360]: 2025-11-29 08:34:17.395 227364 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:17 np0005539551 nova_compute[227360]: 2025-11-29 08:34:17.436 227364 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:18.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:18 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2194635671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:19.883 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:19.884 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:19.885 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 03:34:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:20.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:21Z|00732|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:34:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:21Z|00733|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:34:21 np0005539551 nova_compute[227360]: 2025-11-29 08:34:21.247 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539551 nova_compute[227360]: 2025-11-29 08:34:21.878 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.044 227364 DEBUG nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.045 227364 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.045 227364 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.045 227364 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.045 227364 DEBUG nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.045 227364 WARNING nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:34:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:22 np0005539551 nova_compute[227360]: 2025-11-29 08:34:22.427 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:22.521 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.262 227364 DEBUG nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.263 227364 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.263 227364 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.263 227364 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.263 227364 DEBUG nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.263 227364 WARNING nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:34:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.928 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.929 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539551 nova_compute[227360]: 2025-11-29 08:34:24.929 227364 DEBUG nova.compute.manager [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:34:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:25 np0005539551 nova_compute[227360]: 2025-11-29 08:34:25.560 227364 DEBUG neutronclient.v2_0.client [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:34:25 np0005539551 nova_compute[227360]: 2025-11-29 08:34:25.560 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:25 np0005539551 nova_compute[227360]: 2025-11-29 08:34:25.560 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:25 np0005539551 nova_compute[227360]: 2025-11-29 08:34:25.561 227364 DEBUG nova.network.neutron [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:25 np0005539551 nova_compute[227360]: 2025-11-29 08:34:25.561 227364 DEBUG nova.objects.instance [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'info_cache' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:25Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:34:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:25Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:34:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.590 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405251.5895185, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.590 227364 INFO nova.compute.manager [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.627 227364 DEBUG nova.compute.manager [None req-b2fc4aea-90fa-49cd-b88f-f6fe7fee5685 - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.630 227364 DEBUG nova.compute.manager [None req-b2fc4aea-90fa-49cd-b88f-f6fe7fee5685 - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.662 227364 INFO nova.compute.manager [None req-b2fc4aea-90fa-49cd-b88f-f6fe7fee5685 - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:34:26 np0005539551 nova_compute[227360]: 2025-11-29 08:34:26.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.279 227364 DEBUG nova.network.neutron [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.309 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.310 227364 DEBUG nova.objects.instance [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.410 227364 DEBUG nova.storage.rbd_utils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] removing snapshot(nova-resize) on rbd image(f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.476 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.552 227364 DEBUG nova.virt.libvirt.vif [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.552 227364 DEBUG nova.network.os_vif_util [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.553 227364 DEBUG nova.network.os_vif_util [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.554 227364 DEBUG os_vif [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.556 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa08044a1-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.557 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.559 227364 INFO os_vif [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40')#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.560 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.560 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:27 np0005539551 nova_compute[227360]: 2025-11-29 08:34:27.762 227364 DEBUG oslo_concurrency.processutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4056077534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.192 227364 DEBUG oslo_concurrency.processutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.202 227364 DEBUG nova.compute.provider_tree [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.223 227364 DEBUG nova.scheduler.client.report [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:28.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.287 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.456 227364 INFO nova.scheduler.client.report [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocation for migration 130a277d-60ed-49bc-a634-8c6cc38feb56#033[00m
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.505 227364 DEBUG oslo_concurrency.lockutils [None req-4b7af02f-eb93-48f2-91fa-9d10b552ab4d c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:28 np0005539551 nova_compute[227360]: 2025-11-29 08:34:28.871 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:30Z|00734|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:34:30 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:30Z|00735|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:34:30 np0005539551 nova_compute[227360]: 2025-11-29 08:34:30.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:31 np0005539551 nova_compute[227360]: 2025-11-29 08:34:31.885 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:32.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:32 np0005539551 nova_compute[227360]: 2025-11-29 08:34:32.478 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:32 np0005539551 nova_compute[227360]: 2025-11-29 08:34:32.696 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 03:34:34 np0005539551 nova_compute[227360]: 2025-11-29 08:34:34.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:34.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:36.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539551 nova_compute[227360]: 2025-11-29 08:34:36.888 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:37 np0005539551 nova_compute[227360]: 2025-11-29 08:34:37.481 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:34:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.9 total, 600.0 interval#012Cumulative writes: 51K writes, 199K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 51K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 42.43 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4039 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:34:38 np0005539551 nova_compute[227360]: 2025-11-29 08:34:38.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:38.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:40.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539551 nova_compute[227360]: 2025-11-29 08:34:41.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:41 np0005539551 nova_compute[227360]: 2025-11-29 08:34:41.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:42.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:42 np0005539551 nova_compute[227360]: 2025-11-29 08:34:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:42 np0005539551 nova_compute[227360]: 2025-11-29 08:34:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:42 np0005539551 nova_compute[227360]: 2025-11-29 08:34:42.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:43.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.434 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.434 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/532778696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.868 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.958 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.958 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.961 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.962 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.965 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:44 np0005539551 nova_compute[227360]: 2025-11-29 08:34:44.966 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.129 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.130 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3692MB free_disk=20.742504119873047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.131 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.131 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.213 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance b08df6d7-85bd-4c2a-9bd8-f37384b9148a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.214 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.214 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance aaaa6a6b-0c21-483c-b891-02ebe64e6aab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.214 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.214 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.273 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.367599) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285367646, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2444, "num_deletes": 253, "total_data_size": 5609870, "memory_usage": 5692576, "flush_reason": "Manual Compaction"}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285393594, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3688570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56875, "largest_seqno": 59314, "table_properties": {"data_size": 3678752, "index_size": 6120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21563, "raw_average_key_size": 20, "raw_value_size": 3658652, "raw_average_value_size": 3528, "num_data_blocks": 266, "num_entries": 1037, "num_filter_entries": 1037, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405077, "oldest_key_time": 1764405077, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 26110 microseconds, and 8440 cpu microseconds.
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.393708) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3688570 bytes OK
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.393763) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.395856) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.395871) EVENT_LOG_v1 {"time_micros": 1764405285395866, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.395887) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 5598977, prev total WAL file size 5598977, number of live WAL files 2.
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.397451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3602KB)], [114(10MB)]
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285397481, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 14703935, "oldest_snapshot_seqno": -1}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9224 keys, 12794231 bytes, temperature: kUnknown
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285503502, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12794231, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12733732, "index_size": 36323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 239841, "raw_average_key_size": 26, "raw_value_size": 12570656, "raw_average_value_size": 1362, "num_data_blocks": 1411, "num_entries": 9224, "num_filter_entries": 9224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.503716) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12794231 bytes
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.505560) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.6 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 9749, records dropped: 525 output_compression: NoCompression
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.505576) EVENT_LOG_v1 {"time_micros": 1764405285505569, "job": 72, "event": "compaction_finished", "compaction_time_micros": 106085, "compaction_time_cpu_micros": 26139, "output_level": 6, "num_output_files": 1, "total_output_size": 12794231, "num_input_records": 9749, "num_output_records": 9224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285506161, "job": 72, "event": "table_file_deletion", "file_number": 116}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285508043, "job": 72, "event": "table_file_deletion", "file_number": 114}
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.397371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.508183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.508191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.508194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.508197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:34:45.508200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2456725057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.696 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.701 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.718 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.737 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:34:45 np0005539551 nova_compute[227360]: 2025-11-29 08:34:45.738 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:46 np0005539551 nova_compute[227360]: 2025-11-29 08:34:46.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.528 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.528 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.547 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.605 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.605 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:47 np0005539551 podman[288113]: 2025-11-29 08:34:47.611054716 +0000 UTC m=+0.053846677 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.611 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.611 227364 INFO nova.compute.claims [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:34:47 np0005539551 podman[288112]: 2025-11-29 08:34:47.667062832 +0000 UTC m=+0.109299738 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:34:47 np0005539551 podman[288111]: 2025-11-29 08:34:47.674147453 +0000 UTC m=+0.115513536 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.739 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.739 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.770 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.987 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.989 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:47 np0005539551 nova_compute[227360]: 2025-11-29 08:34:47.989 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:34:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/637342993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.187 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.193 227364 DEBUG nova.compute.provider_tree [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.208 227364 DEBUG nova.scheduler.client.report [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.230 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.231 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.287 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.288 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:34:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:48.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.309 227364 INFO nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.325 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.418 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.420 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.420 227364 INFO nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Creating image(s)#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.448 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.483 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.516 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.521 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.561 227364 DEBUG nova.policy [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.619 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.620 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.621 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.621 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.656 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:48 np0005539551 nova_compute[227360]: 2025-11-29 08:34:48.661 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.162 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [{"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.180 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-2293bfb9-d91d-4ee4-8347-317cf45fe9c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.181 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.182 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.197 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Successfully created port: 0e21f315-1974-4283-92a1-054bfda2ae26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.299 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.373 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.468 227364 DEBUG nova.objects.instance [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.485 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.485 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Ensure instance console log exists: /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.486 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.486 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:49 np0005539551 nova_compute[227360]: 2025-11-29 08:34:49.486 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.058 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Successfully updated port: 0e21f315-1974-4283-92a1-054bfda2ae26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.072 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.072 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.072 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.259 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:34:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.670 227364 DEBUG nova.compute.manager [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.670 227364 DEBUG nova.compute.manager [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing instance network info cache due to event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:50 np0005539551 nova_compute[227360]: 2025-11-29 08:34:50.671 227364 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.006000160s ======
Nov 29 03:34:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:51.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000160s
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.221 227364 DEBUG nova.network.neutron [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.238 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.238 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Instance network_info: |[{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.239 227364 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.239 227364 DEBUG nova.network.neutron [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.245 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Start _get_guest_xml network_info=[{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.251 227364 WARNING nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.255 227364 DEBUG nova.virt.libvirt.host [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.256 227364 DEBUG nova.virt.libvirt.host [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.265 227364 DEBUG nova.virt.libvirt.host [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.266 227364 DEBUG nova.virt.libvirt.host [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.268 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.268 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.269 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.269 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.270 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.270 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.270 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.271 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.271 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.271 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.272 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.272 227364 DEBUG nova.virt.hardware [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.277 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2539617917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.722 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.753 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.757 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:51 np0005539551 nova_compute[227360]: 2025-11-29 08:34:51.897 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3084338241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.132 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.133 227364 DEBUG nova.virt.libvirt.vif [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:48Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.133 227364 DEBUG nova.network.os_vif_util [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.134 227364 DEBUG nova.network.os_vif_util [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.135 227364 DEBUG nova.objects.instance [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.153 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <uuid>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</uuid>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <name>instance-000000ad</name>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:34:51</nova:creationTime>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="serial">1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="uuid">1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:24:58:28"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <target dev="tap0e21f315-19"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log" append="off"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:34:53 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:34:53 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:34:53 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:34:53 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.153 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Preparing to wait for external event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.154 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.154 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.154 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.155 227364 DEBUG nova.virt.libvirt.vif [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:48Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.155 227364 DEBUG nova.network.os_vif_util [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.156 227364 DEBUG nova.network.os_vif_util [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.156 227364 DEBUG os_vif [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.157 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.157 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.157 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.160 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.160 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e21f315-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.161 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e21f315-19, col_values=(('external_ids', {'iface-id': '0e21f315-1974-4283-92a1-054bfda2ae26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:58:28', 'vm-uuid': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:53 np0005539551 NetworkManager[48922]: <info>  [1764405293.1636] manager: (tap0e21f315-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.166 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.170 227364 INFO os_vif [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19')#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.228 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.228 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.228 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:24:58:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.229 227364 INFO nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Using config drive#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.254 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.315 227364 DEBUG nova.network.neutron [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updated VIF entry in instance network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.316 227364 DEBUG nova.network.neutron [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.331 227364 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.664 227364 INFO nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Creating config drive at /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.669 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwgykokob execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.800 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwgykokob" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.829 227364 DEBUG nova.storage.rbd_utils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:53 np0005539551 nova_compute[227360]: 2025-11-29 08:34:53.833 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.007 227364 DEBUG oslo_concurrency.processutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.008 227364 INFO nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Deleting local config drive /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.0535] manager: (tap0e21f315-19): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 03:34:54 np0005539551 kernel: tap0e21f315-19: entered promiscuous mode
Nov 29 03:34:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:54Z|00736|binding|INFO|Claiming lport 0e21f315-1974-4283-92a1-054bfda2ae26 for this chassis.
Nov 29 03:34:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:54Z|00737|binding|INFO|0e21f315-1974-4283-92a1-054bfda2ae26: Claiming fa:16:3e:24:58:28 10.100.0.8
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.062 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:58:28 10.100.0.8'], port_security=['fa:16:3e:24:58:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34a1f28a-44b1-4a29-aed5-fb3f209c19e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51743244-221f-43b8-b76f-d9e520229bc8, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0e21f315-1974-4283-92a1-054bfda2ae26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.063 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0e21f315-1974-4283-92a1-054bfda2ae26 in datapath 8a47f5fc-b01b-4aaa-a961-1be8b8398ced bound to our chassis#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.064 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a47f5fc-b01b-4aaa-a961-1be8b8398ced#033[00m
Nov 29 03:34:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:54Z|00738|binding|INFO|Setting lport 0e21f315-1974-4283-92a1-054bfda2ae26 ovn-installed in OVS
Nov 29 03:34:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:54Z|00739|binding|INFO|Setting lport 0e21f315-1974-4283-92a1-054bfda2ae26 up in Southbound
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.075 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2a45708b-151d-4f05-912a-3359274207ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.077 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a47f5fc-b1 in ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.079 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a47f5fc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.079 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2626a8-a92f-425d-b16d-6690155401f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.080 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6718c60d-7ce6-4aa2-81fd-a0ca9c967007]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 systemd-udevd[288496]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:54 np0005539551 systemd-machined[190756]: New machine qemu-79-instance-000000ad.
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.092 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[625467e6-ebc0-4544-84a6-f93c8a2fa858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.0988] device (tap0e21f315-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.1002] device (tap0e21f315-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:54 np0005539551 systemd[1]: Started Virtual Machine qemu-79-instance-000000ad.
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.117 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8e1115-9c52-4475-8741-9d6ca80ef305]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.145 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[91497493-2aad-4faf-b761-056f1cc53312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.152 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5d861a25-d4e3-453d-9705-082639bea0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.1537] manager: (tap8a47f5fc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.178 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1d212c-830d-460e-94ec-46762972a264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.181 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a2af73-3796-4b26-b7e3-81723030343f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.2019] device (tap8a47f5fc-b0): carrier: link connected
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.206 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[de694e76-30cc-4830-8fc2-38e72ff23249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.223 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[172eaa56-c30f-4c58-8181-dc2c03174b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a47f5fc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:e5:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834108, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288529, 'error': None, 'target': 'ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.237 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb92802-41a4-4d41-a3d5-d9856b556180]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:e553'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834108, 'tstamp': 834108}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288530, 'error': None, 'target': 'ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.253 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[939bc3ef-8ba8-4406-abe3-172450f22c62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a47f5fc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:e5:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834108, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288531, 'error': None, 'target': 'ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.285 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da82c3-1ca0-40fc-97b4-b22507cf15d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:54.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.329 227364 DEBUG nova.compute.manager [req-4d828494-8e60-4eb5-a100-82ac3897670d req-07a6adcb-7dcf-4d88-b803-58ad1c76c26b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.330 227364 DEBUG oslo_concurrency.lockutils [req-4d828494-8e60-4eb5-a100-82ac3897670d req-07a6adcb-7dcf-4d88-b803-58ad1c76c26b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.330 227364 DEBUG oslo_concurrency.lockutils [req-4d828494-8e60-4eb5-a100-82ac3897670d req-07a6adcb-7dcf-4d88-b803-58ad1c76c26b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.330 227364 DEBUG oslo_concurrency.lockutils [req-4d828494-8e60-4eb5-a100-82ac3897670d req-07a6adcb-7dcf-4d88-b803-58ad1c76c26b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.330 227364 DEBUG nova.compute.manager [req-4d828494-8e60-4eb5-a100-82ac3897670d req-07a6adcb-7dcf-4d88-b803-58ad1c76c26b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Processing event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.336 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e79a1c3d-0812-45be-a998-f1e39b923125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.337 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a47f5fc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.338 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.338 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a47f5fc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:54 np0005539551 NetworkManager[48922]: <info>  [1764405294.3404] manager: (tap8a47f5fc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 03:34:54 np0005539551 kernel: tap8a47f5fc-b0: entered promiscuous mode
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.339 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.342 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a47f5fc-b0, col_values=(('external_ids', {'iface-id': '4bfe53e7-2be7-4e57-af2c-2f7abbe283f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:34:54Z|00740|binding|INFO|Releasing lport 4bfe53e7-2be7-4e57-af2c-2f7abbe283f3 from this chassis (sb_readonly=0)
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.343 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.357 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a47f5fc-b01b-4aaa-a961-1be8b8398ced.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a47f5fc-b01b-4aaa-a961-1be8b8398ced.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.358 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd0cd94-073d-4461-8d8c-cac4e5042b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.359 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-8a47f5fc-b01b-4aaa-a961-1be8b8398ced
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/8a47f5fc-b01b-4aaa-a961-1be8b8398ced.pid.haproxy
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 8a47f5fc-b01b-4aaa-a961-1be8b8398ced
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:34:54 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:34:54.361 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'env', 'PROCESS_TAG=haproxy-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a47f5fc-b01b-4aaa-a961-1be8b8398ced.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:54 np0005539551 podman[288579]: 2025-11-29 08:34:54.735232482 +0000 UTC m=+0.050990000 container create bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:34:54 np0005539551 systemd[1]: Started libpod-conmon-bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c.scope.
Nov 29 03:34:54 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:34:54 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a2277bd62811fa56e7b977a3c3cba5598f452e3afc8b8195b0c62df1d65f97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:54 np0005539551 podman[288579]: 2025-11-29 08:34:54.795095001 +0000 UTC m=+0.110852539 container init bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:34:54 np0005539551 podman[288579]: 2025-11-29 08:34:54.800053955 +0000 UTC m=+0.115811473 container start bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:54 np0005539551 podman[288579]: 2025-11-29 08:34:54.709357952 +0000 UTC m=+0.025115490 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:54 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [NOTICE]   (288623) : New worker (288626) forked
Nov 29 03:34:54 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [NOTICE]   (288623) : Loading success.
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.846 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.848 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405294.8475723, 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.848 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.854 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.858 227364 INFO nova.virt.libvirt.driver [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Instance spawned successfully.#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.859 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.868 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.871 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.881 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.881 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.882 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.882 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.882 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.883 227364 DEBUG nova.virt.libvirt.driver [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.907 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.908 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405294.8482263, 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.908 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.936 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.939 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405294.8540497, 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.939 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.950 227364 INFO nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Took 6.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.950 227364 DEBUG nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.960 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.963 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:54 np0005539551 nova_compute[227360]: 2025-11-29 08:34:54.990 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:55 np0005539551 nova_compute[227360]: 2025-11-29 08:34:55.019 227364 INFO nova.compute.manager [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Took 7.43 seconds to build instance.#033[00m
Nov 29 03:34:55 np0005539551 nova_compute[227360]: 2025-11-29 08:34:55.039 227364 DEBUG oslo_concurrency.lockutils [None req-60d12e87-555d-4c31-b8dd-d449378a96a0 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:55 np0005539551 nova_compute[227360]: 2025-11-29 08:34:55.424 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:55 np0005539551 nova_compute[227360]: 2025-11-29 08:34:55.425 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:34:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.442 227364 DEBUG nova.compute.manager [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.442 227364 DEBUG oslo_concurrency.lockutils [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.442 227364 DEBUG oslo_concurrency.lockutils [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.443 227364 DEBUG oslo_concurrency.lockutils [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.443 227364 DEBUG nova.compute.manager [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:56 np0005539551 nova_compute[227360]: 2025-11-29 08:34:56.443 227364 WARNING nova.compute.manager [req-698cbfe1-588a-425d-bde2-4be8b4472708 req-3dda908d-1cc5-4a9e-be8c-98a3d99ed9bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:34:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:58 np0005539551 nova_compute[227360]: 2025-11-29 08:34:58.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:58 np0005539551 nova_compute[227360]: 2025-11-29 08:34:58.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:34:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.912 227364 DEBUG nova.compute.manager [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.913 227364 DEBUG nova.compute.manager [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing instance network info cache due to event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.913 227364 DEBUG oslo_concurrency.lockutils [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.914 227364 DEBUG oslo_concurrency.lockutils [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:59 np0005539551 nova_compute[227360]: 2025-11-29 08:34:59.914 227364 DEBUG nova.network.neutron [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:00.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:03 np0005539551 nova_compute[227360]: 2025-11-29 08:35:03.008 227364 DEBUG nova.network.neutron [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updated VIF entry in instance network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:03 np0005539551 nova_compute[227360]: 2025-11-29 08:35:03.009 227364 DEBUG nova.network.neutron [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:03 np0005539551 nova_compute[227360]: 2025-11-29 08:35:03.034 227364 DEBUG oslo_concurrency.lockutils [req-8783fd9e-26ab-4e57-b049-e5bf2ea45cf1 req-7d23c3de-5df4-4a54-851a-f5532081071c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:03 np0005539551 nova_compute[227360]: 2025-11-29 08:35:03.111 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:03 np0005539551 nova_compute[227360]: 2025-11-29 08:35:03.163 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:03.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.113 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.165 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:08.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.647 227364 INFO nova.compute.manager [None req-3478ce4f-00c7-40a9-8421-ffaff4a1a2b3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Pausing#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.648 227364 DEBUG nova.objects.instance [None req-3478ce4f-00c7-40a9-8421-ffaff4a1a2b3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'flavor' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:08Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:58:28 10.100.0.8
Nov 29 03:35:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:08Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:58:28 10.100.0.8
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.687 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405308.6868083, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.687 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.689 227364 DEBUG nova.compute.manager [None req-3478ce4f-00c7-40a9-8421-ffaff4a1a2b3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.717 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.721 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:08 np0005539551 nova_compute[227360]: 2025-11-29 08:35:08.751 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:35:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:35:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1650 writes, 8001 keys, 1650 commit groups, 1.0 writes per commit group, ingest: 16.35 MB, 0.03 MB/s#012Interval WAL: 1650 writes, 1650 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.2      5.26              0.27        36    0.146       0      0       0.0       0.0#012  L6      1/0   12.20 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.0     35.5     30.4     12.33              1.22        35    0.352    245K    18K       0.0       0.0#012 Sum      1/0   12.20 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.0     24.9     25.6     17.59              1.49        71    0.248    245K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    110.0    112.2      0.64              0.18        10    0.064     47K   2628       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     35.5     30.4     12.33              1.22        35    0.352    245K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.2      5.26              0.27        35    0.150       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.073, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.44 GB write, 0.09 MB/s write, 0.43 GB read, 0.09 MB/s read, 17.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 45.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000302 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2564,43.81 MB,14.4103%) FilterBlock(71,687.61 KB,0.220886%) IndexBlock(71,1.16 MB,0.380707%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:35:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.857 227364 INFO nova.compute.manager [None req-ef17c7a1-8af4-4f5d-8bf2-10fe8daddeb3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Unpausing#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.858 227364 DEBUG nova.objects.instance [None req-ef17c7a1-8af4-4f5d-8bf2-10fe8daddeb3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'flavor' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.883 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405310.883212, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.883 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:35:10 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.887 227364 DEBUG nova.virt.libvirt.guest [None req-ef17c7a1-8af4-4f5d-8bf2-10fe8daddeb3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.887 227364 DEBUG nova.compute.manager [None req-ef17c7a1-8af4-4f5d-8bf2-10fe8daddeb3 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.916 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.919 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:10 np0005539551 nova_compute[227360]: 2025-11-29 08:35:10.943 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 29 03:35:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:11.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:12 np0005539551 nova_compute[227360]: 2025-11-29 08:35:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:12 np0005539551 nova_compute[227360]: 2025-11-29 08:35:12.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:35:12 np0005539551 nova_compute[227360]: 2025-11-29 08:35:12.426 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:35:13 np0005539551 nova_compute[227360]: 2025-11-29 08:35:13.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:13 np0005539551 nova_compute[227360]: 2025-11-29 08:35:13.167 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 03:35:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:13.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:14.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:14 np0005539551 nova_compute[227360]: 2025-11-29 08:35:14.576 227364 INFO nova.compute.manager [None req-59469548-6446-4abb-a514-b8c0710b6fa5 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Get console output#033[00m
Nov 29 03:35:14 np0005539551 nova_compute[227360]: 2025-11-29 08:35:14.581 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:35:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:16.312 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:16 np0005539551 nova_compute[227360]: 2025-11-29 08:35:16.312 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:16.314 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:35:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:16.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.118 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.168 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:18.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.364 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.365 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.365 227364 DEBUG nova.objects.instance [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'flavor' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:18 np0005539551 podman[288687]: 2025-11-29 08:35:18.615087587 +0000 UTC m=+0.061966148 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:35:18 np0005539551 podman[288688]: 2025-11-29 08:35:18.627069491 +0000 UTC m=+0.064068924 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:35:18 np0005539551 podman[288686]: 2025-11-29 08:35:18.633213417 +0000 UTC m=+0.080946051 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:35:18 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.791 227364 DEBUG nova.objects.instance [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.803 227364 DEBUG nova.network.neutron [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:35:18 np0005539551 nova_compute[227360]: 2025-11-29 08:35:18.977 227364 DEBUG nova.policy [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:35:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2386966701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2386966701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 03:35:19 np0005539551 nova_compute[227360]: 2025-11-29 08:35:19.795 227364 DEBUG nova.network.neutron [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Successfully created port: 4092cc4a-c254-4ce0-a176-b4da7ee2a317 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:19.884 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:19.884 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:19.885 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:20.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.796 227364 DEBUG nova.network.neutron [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Successfully updated port: 4092cc4a-c254-4ce0-a176-b4da7ee2a317 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.817 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.817 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.817 227364 DEBUG nova.network.neutron [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.889 227364 DEBUG nova.compute.manager [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-changed-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.889 227364 DEBUG nova.compute.manager [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing instance network info cache due to event network-changed-4092cc4a-c254-4ce0-a176-b4da7ee2a317. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:20 np0005539551 nova_compute[227360]: 2025-11-29 08:35:20.890 227364 DEBUG oslo_concurrency.lockutils [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332837800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332837800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:21.316 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:21 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:35:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:21 np0005539551 nova_compute[227360]: 2025-11-29 08:35:21.421 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:23 np0005539551 nova_compute[227360]: 2025-11-29 08:35:23.121 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:23 np0005539551 nova_compute[227360]: 2025-11-29 08:35:23.170 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.844 227364 DEBUG nova.network.neutron [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.865 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.866 227364 DEBUG oslo_concurrency.lockutils [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.866 227364 DEBUG nova.network.neutron [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing network info cache for port 4092cc4a-c254-4ce0-a176-b4da7ee2a317 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.868 227364 DEBUG nova.virt.libvirt.vif [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.869 227364 DEBUG nova.network.os_vif_util [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.869 227364 DEBUG nova.network.os_vif_util [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.870 227364 DEBUG os_vif [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.870 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.871 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.871 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.873 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.874 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4092cc4a-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.874 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4092cc4a-c2, col_values=(('external_ids', {'iface-id': '4092cc4a-c254-4ce0-a176-b4da7ee2a317', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:3f:68', 'vm-uuid': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:24 np0005539551 NetworkManager[48922]: <info>  [1764405324.8771] manager: (tap4092cc4a-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.880 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.883 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.884 227364 INFO os_vif [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2')#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.885 227364 DEBUG nova.virt.libvirt.vif [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.885 227364 DEBUG nova.network.os_vif_util [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.886 227364 DEBUG nova.network.os_vif_util [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.889 227364 DEBUG nova.virt.libvirt.guest [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:35:24 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:c0:3f:68"/>
Nov 29 03:35:24 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:35:24 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:24 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:35:24 np0005539551 nova_compute[227360]:  <target dev="tap4092cc4a-c2"/>
Nov 29 03:35:24 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:35:24 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:35:24 np0005539551 kernel: tap4092cc4a-c2: entered promiscuous mode
Nov 29 03:35:24 np0005539551 NetworkManager[48922]: <info>  [1764405324.9005] manager: (tap4092cc4a-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 03:35:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:24Z|00741|binding|INFO|Claiming lport 4092cc4a-c254-4ce0-a176-b4da7ee2a317 for this chassis.
Nov 29 03:35:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:24Z|00742|binding|INFO|4092cc4a-c254-4ce0-a176-b4da7ee2a317: Claiming fa:16:3e:c0:3f:68 10.100.0.27
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.902 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.919 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:3f:68 10.100.0.27'], port_security=['fa:16:3e:c0:3f:68 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77dd13d2-8e55-41ac-beb5-7d7fff7c9de4, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4092cc4a-c254-4ce0-a176-b4da7ee2a317) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.921 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4092cc4a-c254-4ce0-a176-b4da7ee2a317 in datapath 4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 bound to our chassis#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.925 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.943 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f37c371c-7f89-4daf-af46-6c1243d1039f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.944 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4dca0dfe-b1 in ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:35:24 np0005539551 systemd-udevd[288753]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.947 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4dca0dfe-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.947 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[422ec52f-e497-4c7a-ba27-8f04e72d265e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.948 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48923c7e-731e-48c4-9456-44e7c2e86869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.959 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b005a1-91ee-41ea-b638-304dcf5dfa67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:24 np0005539551 NetworkManager[48922]: <info>  [1764405324.9616] device (tap4092cc4a-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:35:24 np0005539551 NetworkManager[48922]: <info>  [1764405324.9622] device (tap4092cc4a-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.966 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:24Z|00743|binding|INFO|Setting lport 4092cc4a-c254-4ce0-a176-b4da7ee2a317 ovn-installed in OVS
Nov 29 03:35:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:24Z|00744|binding|INFO|Setting lport 4092cc4a-c254-4ce0-a176-b4da7ee2a317 up in Southbound
Nov 29 03:35:24 np0005539551 nova_compute[227360]: 2025-11-29 08:35:24.970 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:24.972 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3b4dd7-d610-4227-8608-3ae8fc941d38]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.002 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b70d6e9b-f34b-46e3-8e76-4c99419ff582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.009 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c9127d17-c02f-48c5-96b0-ef7d3b62aa9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 NetworkManager[48922]: <info>  [1764405325.0104] manager: (tap4dca0dfe-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.039 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[73a54e5a-2472-4ac2-b8ea-b015d9e2e0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.042 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2683cf-5091-4616-82e8-8d18b86f574e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.062 227364 DEBUG nova.virt.libvirt.driver [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.062 227364 DEBUG nova.virt.libvirt.driver [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.062 227364 DEBUG nova.virt.libvirt.driver [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:24:58:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.063 227364 DEBUG nova.virt.libvirt.driver [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:c0:3f:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:25 np0005539551 NetworkManager[48922]: <info>  [1764405325.0656] device (tap4dca0dfe-b0): carrier: link connected
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.071 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[cca1ff75-458b-453d-a4e3-4763b9db72a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.087 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5791d7-5b31-4210-9928-388f4d400163]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dca0dfe-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:2c:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837194, 'reachable_time': 18073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288781, 'error': None, 'target': 'ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.094 227364 DEBUG nova.virt.libvirt.guest [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:25</nova:creationTime>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:25 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    <nova:port uuid="4092cc4a-c254-4ce0-a176-b4da7ee2a317">
Nov 29 03:35:25 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:25 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:25 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:25 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.103 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bd900118-ec58-43e3-bf23-4084841fa362]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:2cc3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 837194, 'tstamp': 837194}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288782, 'error': None, 'target': 'ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.121 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c57078-89ee-4b20-90ce-9846342f24a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dca0dfe-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:2c:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837194, 'reachable_time': 18073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288783, 'error': None, 'target': 'ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.124 227364 DEBUG oslo_concurrency.lockutils [None req-62760442-8f3f-4ee3-8b81-58a799d5b1f8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.154 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ee055495-a87b-43e3-a232-3101d7628e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.210 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d22be5b0-6f6a-49a4-aff1-3e81645e66fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.212 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dca0dfe-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.212 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.213 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dca0dfe-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:25 np0005539551 kernel: tap4dca0dfe-b0: entered promiscuous mode
Nov 29 03:35:25 np0005539551 NetworkManager[48922]: <info>  [1764405325.2153] manager: (tap4dca0dfe-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.219 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dca0dfe-b0, col_values=(('external_ids', {'iface-id': '4afb5776-1093-4d6e-91f6-e5a855f261d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:25Z|00745|binding|INFO|Releasing lport 4afb5776-1093-4d6e-91f6-e5a855f261d9 from this chassis (sb_readonly=0)
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.220 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:25 np0005539551 nova_compute[227360]: 2025-11-29 08:35:25.232 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.233 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.234 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cf63d94c-7c53-4ab1-8326-ed99d0e7c4f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.234 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5.pid.haproxy
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:35:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:25.235 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'env', 'PROCESS_TAG=haproxy-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:35:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:25.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:25 np0005539551 podman[288816]: 2025-11-29 08:35:25.592270345 +0000 UTC m=+0.049514700 container create 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:35:25 np0005539551 systemd[1]: Started libpod-conmon-6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3.scope.
Nov 29 03:35:25 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:35:25 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad02d1bd17fe489fca59e3a8ca91ddf58b87d958d9e0d8977bea4c4e3556edf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:25 np0005539551 podman[288816]: 2025-11-29 08:35:25.564093853 +0000 UTC m=+0.021338248 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:35:25 np0005539551 podman[288816]: 2025-11-29 08:35:25.669638198 +0000 UTC m=+0.126882553 container init 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:25 np0005539551 podman[288816]: 2025-11-29 08:35:25.674852799 +0000 UTC m=+0.132097154 container start 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:25 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [NOTICE]   (288835) : New worker (288837) forked
Nov 29 03:35:25 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [NOTICE]   (288835) : Loading success.
Nov 29 03:35:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.088 227364 DEBUG nova.network.neutron [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updated VIF entry in instance network info cache for port 4092cc4a-c254-4ce0-a176-b4da7ee2a317. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.089 227364 DEBUG nova.network.neutron [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.102 227364 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.102 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.102 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.102 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.103 227364 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.103 227364 WARNING nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.103 227364 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.103 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.103 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.104 227364 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.104 227364 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.104 227364 WARNING nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.105 227364 DEBUG oslo_concurrency.lockutils [req-d22813fd-9212-4617-af4b-a88636c48080 req-e226d988-0ba7-44c5-8aa3-f7677e8a4a84 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.535 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-4092cc4a-c254-4ce0-a176-b4da7ee2a317" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.536 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-4092cc4a-c254-4ce0-a176-b4da7ee2a317" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.550 227364 DEBUG nova.objects.instance [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'flavor' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.568 227364 DEBUG nova.virt.libvirt.vif [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.568 227364 DEBUG nova.network.os_vif_util [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.569 227364 DEBUG nova.network.os_vif_util [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.572 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.575 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.579 227364 DEBUG nova.virt.libvirt.driver [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Attempting to detach device tap4092cc4a-c2 from instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.580 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:c0:3f:68"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <target dev="tap4092cc4a-c2"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.586 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.590 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface>not found in domain: <domain type='kvm' id='79'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <name>instance-000000ad</name>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <uuid>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</uuid>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:25</nova:creationTime>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:port uuid="4092cc4a-c254-4ce0-a176-b4da7ee2a317">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='serial'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='uuid'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk' index='2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config' index='1'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:24:58:28'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='tap0e21f315-19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:c0:3f:68'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='tap4092cc4a-c2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='net1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/3'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c379,c573</label>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c573</imagelabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.591 227364 INFO nova.virt.libvirt.driver [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully detached device tap4092cc4a-c2 from instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd from the persistent domain config.
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.592 227364 DEBUG nova.virt.libvirt.driver [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] (1/8): Attempting to detach device tap4092cc4a-c2 with device alias net1 from instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.592 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:c0:3f:68"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <target dev="tap4092cc4a-c2"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:35:26 np0005539551 kernel: tap4092cc4a-c2 (unregistering): left promiscuous mode
Nov 29 03:35:26 np0005539551 NetworkManager[48922]: <info>  [1764405326.6515] device (tap4092cc4a-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.661 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405326.660953, 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:26Z|00746|binding|INFO|Releasing lport 4092cc4a-c254-4ce0-a176-b4da7ee2a317 from this chassis (sb_readonly=0)
Nov 29 03:35:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:26Z|00747|binding|INFO|Setting lport 4092cc4a-c254-4ce0-a176-b4da7ee2a317 down in Southbound
Nov 29 03:35:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:26Z|00748|binding|INFO|Removing iface tap4092cc4a-c2 ovn-installed in OVS
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.662 227364 DEBUG nova.virt.libvirt.driver [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Start waiting for the detach event from libvirt for device tap4092cc4a-c2 with device alias net1 for instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.663 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.667 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> not found in domain: <domain type='kvm' id='79'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <name>instance-000000ad</name>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <uuid>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</uuid>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:25</nova:creationTime>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:port uuid="4092cc4a-c254-4ce0-a176-b4da7ee2a317">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='serial'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='uuid'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk' index='2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config' index='1'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:24:58:28'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target dev='tap0e21f315-19'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/3'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c379,c573</label>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c573</imagelabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.667 227364 INFO nova.virt.libvirt.driver [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully detached device tap4092cc4a-c2 from instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd from the live domain config.#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.668 227364 DEBUG nova.virt.libvirt.vif [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.668 227364 DEBUG nova.network.os_vif_util [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.669 227364 DEBUG nova.network.os_vif_util [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.669 227364 DEBUG os_vif [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.671 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.671 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4092cc4a-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.673 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.674 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:3f:68 10.100.0.27'], port_security=['fa:16:3e:c0:3f:68 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77dd13d2-8e55-41ac-beb5-7d7fff7c9de4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4092cc4a-c254-4ce0-a176-b4da7ee2a317) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.675 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.677 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4092cc4a-c254-4ce0-a176-b4da7ee2a317 in datapath 4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 unbound from our chassis#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.680 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.681 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b2a399-877c-4fc3-ab58-01b18d184aa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.681 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 namespace which is not needed anymore#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.686 227364 INFO os_vif [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2')#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.687 227364 DEBUG nova.virt.libvirt.guest [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:26</nova:creationTime>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:26 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:26 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:26 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:35:26 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [NOTICE]   (288835) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:26 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [NOTICE]   (288835) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:26 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [WARNING]  (288835) : Exiting Master process...
Nov 29 03:35:26 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [ALERT]    (288835) : Current worker (288837) exited with code 143 (Terminated)
Nov 29 03:35:26 np0005539551 neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5[288831]: [WARNING]  (288835) : All workers exited. Exiting... (0)
Nov 29 03:35:26 np0005539551 systemd[1]: libpod-6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3.scope: Deactivated successfully.
Nov 29 03:35:26 np0005539551 podman[288868]: 2025-11-29 08:35:26.809804852 +0000 UTC m=+0.044313319 container died 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:35:26 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:26 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ad02d1bd17fe489fca59e3a8ca91ddf58b87d958d9e0d8977bea4c4e3556edf0-merged.mount: Deactivated successfully.
Nov 29 03:35:26 np0005539551 podman[288868]: 2025-11-29 08:35:26.852847777 +0000 UTC m=+0.087356224 container cleanup 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:26 np0005539551 systemd[1]: libpod-conmon-6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3.scope: Deactivated successfully.
Nov 29 03:35:26 np0005539551 podman[288899]: 2025-11-29 08:35:26.909944041 +0000 UTC m=+0.038726909 container remove 6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.915 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2d165f-447e-48b9-9c23-224ff5554c43]: (4, ('Sat Nov 29 08:35:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 (6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3)\n6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3\nSat Nov 29 08:35:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 (6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3)\n6451615c0f46957571cbf253193a3553a336b8ba9f49e2478d01195c9b5adfc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.917 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[81bb0065-8d61-415e-b901-ac1aa995716a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.918 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dca0dfe-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.920 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539551 kernel: tap4dca0dfe-b0: left promiscuous mode
Nov 29 03:35:26 np0005539551 nova_compute[227360]: 2025-11-29 08:35:26.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.936 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba9aaf6-1eb6-4acf-888e-14a9eb33ea4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.951 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cb340f59-4c28-42c3-9572-5c93cfce1579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.952 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2277e3c4-fa41-4629-9fd5-17368d6938bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.967 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[760617e5-ce5c-4c9d-b3e0-86a38da10cf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837187, 'reachable_time': 33169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288914, 'error': None, 'target': 'ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4dca0dfe\x2dbbf9\x2d4d2a\x2d90c3\x2dc0184075e1b5.mount: Deactivated successfully.
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.969 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:26.970 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[437df197-9ac2-4dc6-b90f-9a42f2e09496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:27 np0005539551 nova_compute[227360]: 2025-11-29 08:35:27.320 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:27 np0005539551 nova_compute[227360]: 2025-11-29 08:35:27.320 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:27 np0005539551 nova_compute[227360]: 2025-11-29 08:35:27.321 227364 DEBUG nova.network.neutron [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.122 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.221 227364 DEBUG nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-unplugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.222 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.222 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.222 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.222 227364 DEBUG nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-unplugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.223 227364 WARNING nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-unplugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.223 227364 DEBUG nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.223 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.223 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.223 227364 DEBUG oslo_concurrency.lockutils [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.224 227364 DEBUG nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.224 227364 WARNING nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-plugged-4092cc4a-c254-4ce0-a176-b4da7ee2a317 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.224 227364 DEBUG nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-deleted-4092cc4a-c254-4ce0-a176-b4da7ee2a317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.224 227364 INFO nova.compute.manager [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Neutron deleted interface 4092cc4a-c254-4ce0-a176-b4da7ee2a317; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.224 227364 DEBUG nova.network.neutron [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.247 227364 DEBUG nova.objects.instance [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'system_metadata' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.267 227364 DEBUG nova.objects.instance [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'flavor' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.287 227364 DEBUG nova.virt.libvirt.vif [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.287 227364 DEBUG nova.network.os_vif_util [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.288 227364 DEBUG nova.network.os_vif_util [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.290 227364 DEBUG nova.virt.libvirt.guest [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.294 227364 DEBUG nova.virt.libvirt.guest [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface>not found in domain: <domain type='kvm' id='79'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <name>instance-000000ad</name>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <uuid>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</uuid>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:26</nova:creationTime>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='serial'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='uuid'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk' index='2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config' index='1'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:24:58:28'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='tap0e21f315-19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/3'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c379,c573</label>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c573</imagelabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.294 227364 DEBUG nova.virt.libvirt.guest [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.299 227364 DEBUG nova.virt.libvirt.guest [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c0:3f:68"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4092cc4a-c2"/></interface>not found in domain: <domain type='kvm' id='79'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <name>instance-000000ad</name>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <uuid>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</uuid>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:26</nova:creationTime>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='serial'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='uuid'>1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk' index='2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_disk.config' index='1'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:24:58:28'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target dev='tap0e21f315-19'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/3'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <source path='/dev/pts/3'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd/console.log' append='off'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c379,c573</label>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c573</imagelabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.300 227364 WARNING nova.virt.libvirt.driver [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Detaching interface fa:16:3e:c0:3f:68 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4092cc4a-c2' not found.
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.301 227364 DEBUG nova.virt.libvirt.vif [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.301 227364 DEBUG nova.network.os_vif_util [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "address": "fa:16:3e:c0:3f:68", "network": {"id": "4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5", "bridge": "br-int", "label": "tempest-network-smoke--1326133175", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092cc4a-c2", "ovs_interfaceid": "4092cc4a-c254-4ce0-a176-b4da7ee2a317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.302 227364 DEBUG nova.network.os_vif_util [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.303 227364 DEBUG os_vif [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.304 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4092cc4a-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.305 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.307 227364 INFO os_vif [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:3f:68,bridge_name='br-int',has_traffic_filtering=True,id=4092cc4a-c254-4ce0-a176-b4da7ee2a317,network=Network(4dca0dfe-bbf9-4d2a-90c3-c0184075e1b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092cc4a-c2')
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.308 227364 DEBUG nova.virt.libvirt.guest [req-00b689d2-c499-4d5e-af99-66078708bded req-f8106872-c70b-4606-851c-0753a89698f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1296419618</nova:name>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:35:28</nova:creationTime>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    <nova:port uuid="0e21f315-1974-4283-92a1-054bfda2ae26">
Nov 29 03:35:28 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:35:28 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:35:28 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 03:35:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:28.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:28Z|00749|binding|INFO|Releasing lport 4bfe53e7-2be7-4e57-af2c-2f7abbe283f3 from this chassis (sb_readonly=0)
Nov 29 03:35:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:28Z|00750|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:35:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:28Z|00751|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.849 227364 INFO nova.network.neutron [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Port 4092cc4a-c254-4ce0-a176-b4da7ee2a317 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.849 227364 DEBUG nova.network.neutron [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.882 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:28 np0005539551 nova_compute[227360]: 2025-11-29 08:35:28.908 227364 DEBUG oslo_concurrency.lockutils [None req-4acd6c9b-6fda-4372-9eb0-c952c251c78f 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-4092cc4a-c254-4ce0-a176-b4da7ee2a317" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.405 227364 DEBUG nova.compute.manager [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.405 227364 DEBUG nova.compute.manager [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing instance network info cache due to event network-changed-0e21f315-1974-4283-92a1-054bfda2ae26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.406 227364 DEBUG oslo_concurrency.lockutils [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.406 227364 DEBUG oslo_concurrency.lockutils [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.406 227364 DEBUG nova.network.neutron [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Refreshing network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.454 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.454 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.454 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.455 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.455 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.456 227364 INFO nova.compute.manager [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Terminating instance#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.457 227364 DEBUG nova.compute.manager [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:29 np0005539551 kernel: tap0e21f315-19 (unregistering): left promiscuous mode
Nov 29 03:35:29 np0005539551 NetworkManager[48922]: <info>  [1764405329.5127] device (tap0e21f315-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:29Z|00752|binding|INFO|Releasing lport 0e21f315-1974-4283-92a1-054bfda2ae26 from this chassis (sb_readonly=0)
Nov 29 03:35:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:29Z|00753|binding|INFO|Setting lport 0e21f315-1974-4283-92a1-054bfda2ae26 down in Southbound
Nov 29 03:35:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:29Z|00754|binding|INFO|Removing iface tap0e21f315-19 ovn-installed in OVS
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.524 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.531 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:58:28 10.100.0.8'], port_security=['fa:16:3e:24:58:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34a1f28a-44b1-4a29-aed5-fb3f209c19e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51743244-221f-43b8-b76f-d9e520229bc8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=0e21f315-1974-4283-92a1-054bfda2ae26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.533 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 0e21f315-1974-4283-92a1-054bfda2ae26 in datapath 8a47f5fc-b01b-4aaa-a961-1be8b8398ced unbound from our chassis#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.536 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a47f5fc-b01b-4aaa-a961-1be8b8398ced, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.538 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9221591b-ba0d-4f35-bca6-c3cfad3a3670]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.538 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced namespace which is not needed anymore#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.542 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Nov 29 03:35:29 np0005539551 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ad.scope: Consumed 15.081s CPU time.
Nov 29 03:35:29 np0005539551 systemd-machined[190756]: Machine qemu-79-instance-000000ad terminated.
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [NOTICE]   (288623) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [NOTICE]   (288623) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [WARNING]  (288623) : Exiting Master process...
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [WARNING]  (288623) : Exiting Master process...
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [ALERT]    (288623) : Current worker (288626) exited with code 143 (Terminated)
Nov 29 03:35:29 np0005539551 neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced[288618]: [WARNING]  (288623) : All workers exited. Exiting... (0)
Nov 29 03:35:29 np0005539551 systemd[1]: libpod-bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c.scope: Deactivated successfully.
Nov 29 03:35:29 np0005539551 podman[288938]: 2025-11-29 08:35:29.665045564 +0000 UTC m=+0.043630372 container died bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.678 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.684 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.689 227364 INFO nova.virt.libvirt.driver [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Instance destroyed successfully.#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.689 227364 DEBUG nova.objects.instance [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:29 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:29 np0005539551 systemd[1]: var-lib-containers-storage-overlay-40a2277bd62811fa56e7b977a3c3cba5598f452e3afc8b8195b0c62df1d65f97-merged.mount: Deactivated successfully.
Nov 29 03:35:29 np0005539551 podman[288938]: 2025-11-29 08:35:29.704705646 +0000 UTC m=+0.083290454 container cleanup bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.709 227364 DEBUG nova.virt.libvirt.vif [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1296419618',display_name='tempest-TestNetworkBasicOps-server-1296419618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1296419618',id=173,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEhyPFmOb1buTdfp2NZX2b9gSFBAogkRMZKVdLyroT5Ix5fvym2lepYKOMKgz9iBT4Fr3f7j9Gn+L6kUd1A62pvPy+wjiM7n7mFxRX0SSB7I3/1dVL5dTHaRcDNjAm50LQ==',key_name='tempest-TestNetworkBasicOps-1813521378',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-vm52hjet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.710 227364 DEBUG nova.network.os_vif_util [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.711 227364 DEBUG nova.network.os_vif_util [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.711 227364 DEBUG os_vif [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:29 np0005539551 systemd[1]: libpod-conmon-bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c.scope: Deactivated successfully.
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.713 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.714 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e21f315-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.717 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.719 227364 INFO os_vif [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:58:28,bridge_name='br-int',has_traffic_filtering=True,id=0e21f315-1974-4283-92a1-054bfda2ae26,network=Network(8a47f5fc-b01b-4aaa-a961-1be8b8398ced),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e21f315-19')#033[00m
Nov 29 03:35:29 np0005539551 podman[288977]: 2025-11-29 08:35:29.780559458 +0000 UTC m=+0.047036973 container remove bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.786 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[17ba832c-0c05-4770-a9c8-52ce240dbf98]: (4, ('Sat Nov 29 08:35:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced (bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c)\nbd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c\nSat Nov 29 08:35:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced (bd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c)\nbd12f23c02ec5c416f463747aa2a18b42e46ff5c2ac4035f409366d4a3ec6f1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.789 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[59f57106-66de-47f6-bd7e-21f21f7b6e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.790 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a47f5fc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:29 np0005539551 kernel: tap8a47f5fc-b0: left promiscuous mode
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.791 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 nova_compute[227360]: 2025-11-29 08:35:29.805 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.808 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[db18ec38-2c4f-4563-9bf0-f9a95c762cf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.822 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[026c925c-8fe2-4433-b433-a2d753994efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.823 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[89d35fb2-519b-4292-8164-d20877812b21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.840 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[10fde7ce-9cde-44da-b517-8c16dad229d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834102, 'reachable_time': 26132, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289010, 'error': None, 'target': 'ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.844 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a47f5fc-b01b-4aaa-a961-1be8b8398ced deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:29.844 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7061d05c-bbb2-4da4-a179-3c94bf270bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:29 np0005539551 systemd[1]: run-netns-ovnmeta\x2d8a47f5fc\x2db01b\x2d4aaa\x2da961\x2d1be8b8398ced.mount: Deactivated successfully.
Nov 29 03:35:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.574 227364 INFO nova.virt.libvirt.driver [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Deleting instance files /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_del#033[00m
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.575 227364 INFO nova.virt.libvirt.driver [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Deletion of /var/lib/nova/instances/1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd_del complete#033[00m
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.626 227364 INFO nova.compute.manager [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.627 227364 DEBUG oslo.service.loopingcall [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.627 227364 DEBUG nova.compute.manager [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:30 np0005539551 nova_compute[227360]: 2025-11-29 08:35:30.628 227364 DEBUG nova.network.neutron [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.217 227364 DEBUG nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-unplugged-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.218 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.218 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.219 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.219 227364 DEBUG nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-unplugged-0e21f315-1974-4283-92a1-054bfda2ae26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.220 227364 DEBUG nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-unplugged-0e21f315-1974-4283-92a1-054bfda2ae26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.220 227364 DEBUG nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.220 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.221 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.221 227364 DEBUG oslo_concurrency.lockutils [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.221 227364 DEBUG nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] No waiting events found dispatching network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.222 227364 WARNING nova.compute.manager [req-1f139d0d-30f2-4328-aac8-e9fdc1bbc084 req-4cb7d295-9acb-4bbb-946b-21cc0b37076f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received unexpected event network-vif-plugged-0e21f315-1974-4283-92a1-054bfda2ae26 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.386 227364 DEBUG nova.network.neutron [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updated VIF entry in instance network info cache for port 0e21f315-1974-4283-92a1-054bfda2ae26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.386 227364 DEBUG nova.network.neutron [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [{"id": "0e21f315-1974-4283-92a1-054bfda2ae26", "address": "fa:16:3e:24:58:28", "network": {"id": "8a47f5fc-b01b-4aaa-a961-1be8b8398ced", "bridge": "br-int", "label": "tempest-network-smoke--178153047", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e21f315-19", "ovs_interfaceid": "0e21f315-1974-4283-92a1-054bfda2ae26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:31.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.423 227364 DEBUG oslo_concurrency.lockutils [req-44f059e1-9497-45be-8363-eb52c7f07b2f req-56b21e8d-09dd-4a54-ba20-9f60ddb0e1d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.629 227364 DEBUG nova.network.neutron [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.645 227364 INFO nova.compute.manager [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Took 1.02 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.686 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.687 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:31 np0005539551 nova_compute[227360]: 2025-11-29 08:35:31.837 227364 DEBUG oslo_concurrency.processutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.079 227364 DEBUG nova.compute.manager [req-acf5f1b9-8842-42ca-972c-f3b71478dff4 req-d4debec8-eece-4141-81c0-88f24e1ae156 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Received event network-vif-deleted-0e21f315-1974-4283-92a1-054bfda2ae26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2461438424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.302 227364 DEBUG oslo_concurrency.processutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.308 227364 DEBUG nova.compute.provider_tree [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.322 227364 DEBUG nova.scheduler.client.report [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.342 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.367 227364 INFO nova.scheduler.client.report [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd#033[00m
Nov 29 03:35:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:32.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.401 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.402 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.402 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.402 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.402 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.403 227364 INFO nova.compute.manager [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Terminating instance#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.404 227364 DEBUG nova.compute.manager [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.438 227364 DEBUG oslo_concurrency.lockutils [None req-d8a0b43c-3d33-4f07-9694-680c1c50b5ec 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:32 np0005539551 kernel: tapc4bf30df-8d (unregistering): left promiscuous mode
Nov 29 03:35:32 np0005539551 NetworkManager[48922]: <info>  [1764405332.4514] device (tapc4bf30df-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:32Z|00755|binding|INFO|Releasing lport c4bf30df-8d4f-4601-a7a6-1d851938fab0 from this chassis (sb_readonly=0)
Nov 29 03:35:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:32Z|00756|binding|INFO|Setting lport c4bf30df-8d4f-4601-a7a6-1d851938fab0 down in Southbound
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:32Z|00757|binding|INFO|Removing iface tapc4bf30df-8d ovn-installed in OVS
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.465 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:e3:fc 10.100.0.4'], port_security=['fa:16:3e:fb:e3:fc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2293bfb9-d91d-4ee4-8347-317cf45fe9c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c4bf30df-8d4f-4601-a7a6-1d851938fab0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.466 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c4bf30df-8d4f-4601-a7a6-1d851938fab0 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.467 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.480 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.482 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[378ae528-cdbd-413a-ad28-98d61c6756e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.509 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[284ccde1-be92-47b3-bd9d-dd2113f0c42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.513 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[379b436f-43ce-432e-a08e-15c7865d93cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Nov 29 03:35:32 np0005539551 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a2.scope: Consumed 22.097s CPU time.
Nov 29 03:35:32 np0005539551 systemd-machined[190756]: Machine qemu-76-instance-000000a2 terminated.
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.538 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4d693d7c-aa06-4cb9-9ca7-acf672387212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.556 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[30412502-1cd6-486f-909f-7b0b32c6596f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814431, 'reachable_time': 15775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289043, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.576 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c896eaea-110f-4181-ac28-3c8b5b9a7626]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814444, 'tstamp': 814444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289044, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814447, 'tstamp': 814447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289044, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.581 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.583 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.587 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.588 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.589 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.589 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:32.590 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.645 227364 INFO nova.virt.libvirt.driver [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Instance destroyed successfully.#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.646 227364 DEBUG nova.objects.instance [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.660 227364 DEBUG nova.virt.libvirt.vif [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=162,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:32:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-6pwgg4cb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=2293bfb9-d91d-4ee4-8347-317cf45fe9c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.661 227364 DEBUG nova.network.os_vif_util [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "address": "fa:16:3e:fb:e3:fc", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4bf30df-8d", "ovs_interfaceid": "c4bf30df-8d4f-4601-a7a6-1d851938fab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.662 227364 DEBUG nova.network.os_vif_util [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.662 227364 DEBUG os_vif [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.665 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.665 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4bf30df-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.725 227364 INFO os_vif [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:e3:fc,bridge_name='br-int',has_traffic_filtering=True,id=c4bf30df-8d4f-4601-a7a6-1d851938fab0,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4bf30df-8d')#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.928 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.929 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.929 227364 INFO nova.compute.manager [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Shelving#033[00m
Nov 29 03:35:32 np0005539551 nova_compute[227360]: 2025-11-29 08:35:32.948 227364 DEBUG nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.082 227364 INFO nova.virt.libvirt.driver [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Deleting instance files /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4_del#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.082 227364 INFO nova.virt.libvirt.driver [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Deletion of /var/lib/nova/instances/2293bfb9-d91d-4ee4-8347-317cf45fe9c4_del complete#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.124 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.140 227364 INFO nova.compute.manager [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.141 227364 DEBUG oslo.service.loopingcall [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.141 227364 DEBUG nova.compute.manager [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.141 227364 DEBUG nova.network.neutron [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.291 227364 DEBUG nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-unplugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.291 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.291 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.291 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] No waiting events found dispatching network-vif-unplugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-unplugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.292 227364 DEBUG oslo_concurrency.lockutils [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.293 227364 DEBUG nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] No waiting events found dispatching network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.293 227364 WARNING nova.compute.manager [req-92f3cc0e-283c-440e-ac6c-0389fc0f1b9d req-f2d7a676-8fae-4a55-9b75-3d982b4a89da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received unexpected event network-vif-plugged-c4bf30df-8d4f-4601-a7a6-1d851938fab0 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:35:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:33.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.771 227364 DEBUG nova.network.neutron [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.788 227364 INFO nova.compute.manager [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Took 0.65 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.846 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.847 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:33 np0005539551 nova_compute[227360]: 2025-11-29 08:35:33.915 227364 DEBUG oslo_concurrency.processutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.149 227364 DEBUG nova.compute.manager [req-33d492f5-cc04-4e14-a327-7fafdbe7ba43 req-4dfb0ffb-fea0-408d-979f-f6a49ba434aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Received event network-vif-deleted-c4bf30df-8d4f-4601-a7a6-1d851938fab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/871889379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.365 227364 DEBUG oslo_concurrency.processutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.370 227364 DEBUG nova.compute.provider_tree [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:34.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.387 227364 DEBUG nova.scheduler.client.report [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.414 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.445 227364 INFO nova.scheduler.client.report [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance 2293bfb9-d91d-4ee4-8347-317cf45fe9c4#033[00m
Nov 29 03:35:34 np0005539551 nova_compute[227360]: 2025-11-29 08:35:34.527 227364 DEBUG oslo_concurrency.lockutils [None req-0f59848c-44c1-4f37-9a6f-9605b837b146 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "2293bfb9-d91d-4ee4-8347-317cf45fe9c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00758|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00759|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.136 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00760|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00761|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.258 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 kernel: tap2c7cb162-70 (unregistering): left promiscuous mode
Nov 29 03:35:35 np0005539551 NetworkManager[48922]: <info>  [1764405335.3091] device (tap2c7cb162-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00762|binding|INFO|Releasing lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00763|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 down in Southbound
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.316 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:35Z|00764|binding|INFO|Removing iface tap2c7cb162-70 ovn-installed in OVS
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.328 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.330 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 unbound from our chassis#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.332 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a83b79bc-6262-43e7-a9e5-5e808a213726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.333 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[488a5179-626d-4e95-bb0d-87cff877cd8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.335 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace which is not needed anymore#033[00m
Nov 29 03:35:35 np0005539551 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539551 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000aa.scope: Consumed 16.804s CPU time.
Nov 29 03:35:35 np0005539551 systemd-machined[190756]: Machine qemu-78-instance-000000aa terminated.
Nov 29 03:35:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:35 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [NOTICE]   (287803) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:35 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [NOTICE]   (287803) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:35 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [WARNING]  (287803) : Exiting Master process...
Nov 29 03:35:35 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [ALERT]    (287803) : Current worker (287805) exited with code 143 (Terminated)
Nov 29 03:35:35 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[287799]: [WARNING]  (287803) : All workers exited. Exiting... (0)
Nov 29 03:35:35 np0005539551 systemd[1]: libpod-3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539551 podman[289119]: 2025-11-29 08:35:35.462266092 +0000 UTC m=+0.043251941 container died 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:35 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539551 systemd[1]: var-lib-containers-storage-overlay-a3726ef2f489ba611f97c706ab774be9f506e625d35c308e3d7642e58b5c9410-merged.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539551 podman[289119]: 2025-11-29 08:35:35.499667593 +0000 UTC m=+0.080653442 container cleanup 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:35 np0005539551 systemd[1]: libpod-conmon-3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539551 podman[289149]: 2025-11-29 08:35:35.581989201 +0000 UTC m=+0.058818802 container remove 3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.589 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbf8a3c-f4b1-40e1-ab43-265eae5d4248]: (4, ('Sat Nov 29 08:35:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f)\n3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f\nSat Nov 29 08:35:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f)\n3c1ad551571f04e32858560c9a1b8f1ff15a420c626ed9a088604f62b049566f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.591 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1a881ca1-4240-4007-99fb-46c19e62e58d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.592 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.594 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 kernel: tapa83b79bc-60: left promiscuous mode
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 DEBUG nova.compute.manager [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 DEBUG oslo_concurrency.lockutils [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 DEBUG oslo_concurrency.lockutils [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 DEBUG oslo_concurrency.lockutils [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 DEBUG nova.compute.manager [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.601 227364 WARNING nova.compute.manager [req-efdf107e-8091-44e1-811b-3d6e812719a9 req-2c794f77-bb5a-45a2-87eb-414fece42968 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.612 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.615 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3918ef-a124-412d-a769-de31222b4450]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.630 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8e332999-c56a-4e42-8999-1ed0729901e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.632 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[857c512f-3c90-420e-a4f8-122881a12d4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.648 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec0ad11-a79a-4511-b1dc-4c5f9cc0bee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829866, 'reachable_time': 18673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289179, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.650 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:35.650 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c696fa9d-52a2-4f09-977b-fdbc764864c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539551 systemd[1]: run-netns-ovnmeta\x2da83b79bc\x2d6262\x2d43e7\x2da9e5\x2d5e808a213726.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.965 227364 INFO nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.972 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance destroyed successfully.#033[00m
Nov 29 03:35:35 np0005539551 nova_compute[227360]: 2025-11-29 08:35:35.972 227364 DEBUG nova.objects.instance [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'numa_topology' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:36 np0005539551 nova_compute[227360]: 2025-11-29 08:35:36.255 227364 INFO nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Beginning cold snapshot process#033[00m
Nov 29 03:35:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:36 np0005539551 nova_compute[227360]: 2025-11-29 08:35:36.454 227364 DEBUG nova.virt.libvirt.imagebackend [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:35:36 np0005539551 nova_compute[227360]: 2025-11-29 08:35:36.991 227364 DEBUG nova.storage.rbd_utils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] creating snapshot(f620896e00b24cdcada2999798ceb346) on rbd image(aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:35:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:37.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.754 227364 DEBUG nova.compute.manager [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.755 227364 DEBUG oslo_concurrency.lockutils [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.755 227364 DEBUG oslo_concurrency.lockutils [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.755 227364 DEBUG oslo_concurrency.lockutils [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.756 227364 DEBUG nova.compute.manager [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:37 np0005539551 nova_compute[227360]: 2025-11-29 08:35:37.756 227364 WARNING nova.compute.manager [req-6fa6fb37-65b9-437f-8a13-ab322822ffd2 req-d982de4b-3fbf-4406-b659-b80bd6a5643b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:35:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 03:35:38 np0005539551 nova_compute[227360]: 2025-11-29 08:35:38.005 227364 DEBUG nova.storage.rbd_utils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] cloning vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk@f620896e00b24cdcada2999798ceb346 to images/f502084d-e717-47c5-8881-072bf335e491 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:35:38 np0005539551 nova_compute[227360]: 2025-11-29 08:35:38.127 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539551 nova_compute[227360]: 2025-11-29 08:35:38.150 227364 DEBUG nova.storage.rbd_utils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] flattening images/f502084d-e717-47c5-8881-072bf335e491 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:35:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:38 np0005539551 nova_compute[227360]: 2025-11-29 08:35:38.577 227364 DEBUG nova.storage.rbd_utils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] removing snapshot(f620896e00b24cdcada2999798ceb346) on rbd image(aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:35:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.166 227364 DEBUG nova.storage.rbd_utils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] creating snapshot(snap) on rbd image(f502084d-e717-47c5-8881-072bf335e491) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:35:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:39.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.667 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.668 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.668 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.669 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.669 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.671 227364 INFO nova.compute.manager [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Terminating instance#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.673 227364 DEBUG nova.compute.manager [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:39 np0005539551 kernel: tap19b8adf6-01 (unregistering): left promiscuous mode
Nov 29 03:35:39 np0005539551 NetworkManager[48922]: <info>  [1764405339.7129] device (tap19b8adf6-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:39Z|00765|binding|INFO|Releasing lport 19b8adf6-0177-4b65-8028-bf0fe37afa9a from this chassis (sb_readonly=0)
Nov 29 03:35:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:39Z|00766|binding|INFO|Setting lport 19b8adf6-0177-4b65-8028-bf0fe37afa9a down in Southbound
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.762 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.763 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:35:39Z|00767|binding|INFO|Removing iface tap19b8adf6-01 ovn-installed in OVS
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.764 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:39.772 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:d8:67 10.100.0.14'], port_security=['fa:16:3e:17:d8:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b08df6d7-85bd-4c2a-9bd8-f37384b9148a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=19b8adf6-0177-4b65-8028-bf0fe37afa9a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:39.773 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 19b8adf6-0177-4b65-8028-bf0fe37afa9a in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:39.774 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:39.775 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b53b2a03-b4e3-4baa-968b-c28bc3c6f6fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:39.775 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace which is not needed anymore#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Nov 29 03:35:39 np0005539551 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009f.scope: Consumed 22.768s CPU time.
Nov 29 03:35:39 np0005539551 systemd-machined[190756]: Machine qemu-74-instance-0000009f terminated.
Nov 29 03:35:39 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [NOTICE]   (285260) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:39 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [NOTICE]   (285260) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:39 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [WARNING]  (285260) : Exiting Master process...
Nov 29 03:35:39 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [ALERT]    (285260) : Current worker (285262) exited with code 143 (Terminated)
Nov 29 03:35:39 np0005539551 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[285256]: [WARNING]  (285260) : All workers exited. Exiting... (0)
Nov 29 03:35:39 np0005539551 systemd[1]: libpod-5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39.scope: Deactivated successfully.
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.911 227364 INFO nova.virt.libvirt.driver [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Instance destroyed successfully.#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.912 227364 DEBUG nova.objects.instance [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid b08df6d7-85bd-4c2a-9bd8-f37384b9148a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:39 np0005539551 podman[289347]: 2025-11-29 08:35:39.915757989 +0000 UTC m=+0.046170250 container died 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.928 227364 DEBUG nova.virt.libvirt.vif [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-33702943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-33702943',id=159,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-10oir9rx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVo
lumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:37Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=b08df6d7-85bd-4c2a-9bd8-f37384b9148a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.929 227364 DEBUG nova.network.os_vif_util [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "address": "fa:16:3e:17:d8:67", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8adf6-01", "ovs_interfaceid": "19b8adf6-0177-4b65-8028-bf0fe37afa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.930 227364 DEBUG nova.network.os_vif_util [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.931 227364 DEBUG os_vif [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.934 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19b8adf6-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.936 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.942 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay-3635dcbb01eb160e0a1065c6ad069524e5aa01ddce054d2277257fc0e5c79cd5-merged.mount: Deactivated successfully.
Nov 29 03:35:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:39 np0005539551 nova_compute[227360]: 2025-11-29 08:35:39.947 227364 INFO os_vif [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:d8:67,bridge_name='br-int',has_traffic_filtering=True,id=19b8adf6-0177-4b65-8028-bf0fe37afa9a,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8adf6-01')#033[00m
Nov 29 03:35:39 np0005539551 podman[289347]: 2025-11-29 08:35:39.950097378 +0000 UTC m=+0.080509639 container cleanup 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:35:39 np0005539551 systemd[1]: libpod-conmon-5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39.scope: Deactivated successfully.
Nov 29 03:35:40 np0005539551 podman[289390]: 2025-11-29 08:35:40.019348432 +0000 UTC m=+0.044228378 container remove 5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.025 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[13d1ce25-f202-48f2-be8c-764dfe8ff34e]: (4, ('Sat Nov 29 08:35:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39)\n5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39\nSat Nov 29 08:35:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39)\n5f2fab8c24766b411677664c1f9208a908c72d06a38a27719511f83795e20c39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.027 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[10f345aa-f9cd-41cc-84fa-044b6efc8d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.027 227364 DEBUG nova.compute.manager [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-unplugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.028 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.028 227364 DEBUG oslo_concurrency.lockutils [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.028 227364 DEBUG oslo_concurrency.lockutils [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.029 227364 DEBUG oslo_concurrency.lockutils [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.029 227364 DEBUG nova.compute.manager [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] No waiting events found dispatching network-vif-unplugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:40 np0005539551 kernel: taped50ff83-50: left promiscuous mode
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.030 227364 DEBUG nova.compute.manager [req-04446cff-3f21-490b-8bd4-ccd8aac90687 req-46964900-63e8-44cc-bf65-c4da865f4589 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-unplugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d10efa13-bc57-4647-9935-240974bf231e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.075 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[20af59a4-b692-4c1c-a914-3d80be202906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.077 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[12a249f6-a3fe-4c2b-9062-8239f5f3267c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.091 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[53101953-5bcb-40ff-a986-35b33960f8ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814424, 'reachable_time': 39553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289408, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.094 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:35:40.094 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a699d4-3956-49da-bf9e-f6341cfbbe6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539551 systemd[1]: run-netns-ovnmeta\x2ded50ff83\x2d51d1\x2d4b35\x2db85c\x2d1cbe6fb812c6.mount: Deactivated successfully.
Nov 29 03:35:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 03:35:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:40.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.503 227364 INFO nova.virt.libvirt.driver [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Deleting instance files /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a_del#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.504 227364 INFO nova.virt.libvirt.driver [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Deletion of /var/lib/nova/instances/b08df6d7-85bd-4c2a-9bd8-f37384b9148a_del complete#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.607 227364 INFO nova.compute.manager [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.608 227364 DEBUG oslo.service.loopingcall [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.608 227364 DEBUG nova.compute.manager [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:40 np0005539551 nova_compute[227360]: 2025-11-29 08:35:40.609 227364 DEBUG nova.network.neutron [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.589 227364 DEBUG nova.network.neutron [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.614 227364 INFO nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Snapshot image upload complete#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.615 227364 DEBUG nova.compute.manager [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.623 227364 INFO nova.compute.manager [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Took 1.01 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.644 227364 DEBUG nova.compute.manager [req-e573895e-0759-4d39-b1c3-136c1114c4b0 req-516cbcf7-e747-4f34-841f-a286513b5ad2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-deleted-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.678 227364 INFO nova.compute.manager [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Shelve offloading#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.685 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance destroyed successfully.#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.686 227364 DEBUG nova.compute.manager [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.688 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.689 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.689 227364 DEBUG nova.network.neutron [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.847 227364 INFO nova.compute.manager [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.899 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.899 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:41 np0005539551 nova_compute[227360]: 2025-11-29 08:35:41.996 227364 DEBUG oslo_concurrency.processutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.323 227364 DEBUG nova.compute.manager [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.323 227364 DEBUG oslo_concurrency.lockutils [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.324 227364 DEBUG oslo_concurrency.lockutils [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.324 227364 DEBUG oslo_concurrency.lockutils [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.324 227364 DEBUG nova.compute.manager [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] No waiting events found dispatching network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.324 227364 WARNING nova.compute.manager [req-b7546d7b-2f44-4184-a47a-ef94143c7ee9 req-077c23d2-2d55-44a5-a085-e5cbc4df083e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Received unexpected event network-vif-plugged-19b8adf6-0177-4b65-8028-bf0fe37afa9a for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:42.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2130754502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.467 227364 DEBUG oslo_concurrency.processutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.473 227364 DEBUG nova.compute.provider_tree [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.493 227364 DEBUG nova.scheduler.client.report [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.533 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.568 227364 INFO nova.scheduler.client.report [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance b08df6d7-85bd-4c2a-9bd8-f37384b9148a#033[00m
Nov 29 03:35:42 np0005539551 nova_compute[227360]: 2025-11-29 08:35:42.646 227364 DEBUG oslo_concurrency.lockutils [None req-1e03c4ed-8849-4f51-94e9-ba0529075fbd c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "b08df6d7-85bd-4c2a-9bd8-f37384b9148a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:43 np0005539551 nova_compute[227360]: 2025-11-29 08:35:43.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:43.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.041 227364 DEBUG nova.network.neutron [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.076 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:44.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.433 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.433 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.688 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405329.6864328, 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.689 227364 INFO nova.compute.manager [-] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.721 227364 DEBUG nova.compute.manager [None req-0145a244-fe16-4cd0-b5d6-48620c9749cb - - - - - -] [instance: 1e514c3f-1e74-4c88-bf51-ba87ca7b5bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3325864162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.866 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.938 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.981 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:44 np0005539551 nova_compute[227360]: 2025-11-29 08:35:44.982 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.183 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.184 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4304MB free_disk=20.87621307373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.185 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.185 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.255 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance aaaa6a6b-0c21-483c-b891-02ebe64e6aab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.255 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.256 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.309 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.425 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance destroyed successfully.#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.426 227364 DEBUG nova.objects.instance [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'resources' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:45.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.452 227364 DEBUG nova.virt.libvirt.vif [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member',shelved_at='2025-11-29T08:35:41.615165',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='f502084d-e717-47c5-8881-072bf335e491'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:35:36Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.452 227364 DEBUG nova.network.os_vif_util [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.454 227364 DEBUG nova.network.os_vif_util [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.454 227364 DEBUG os_vif [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.456 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.456 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c7cb162-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.461 227364 INFO os_vif [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70')#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.505 227364 DEBUG nova.compute.manager [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.506 227364 DEBUG nova.compute.manager [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing instance network info cache due to event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.506 227364 DEBUG oslo_concurrency.lockutils [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.507 227364 DEBUG oslo_concurrency.lockutils [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.507 227364 DEBUG nova.network.neutron [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1462669811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.734 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.740 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.760 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.788 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.788 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.841 227364 INFO nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deleting instance files /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_del#033[00m
Nov 29 03:35:45 np0005539551 nova_compute[227360]: 2025-11-29 08:35:45.842 227364 INFO nova.virt.libvirt.driver [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deletion of /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_del complete#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.047 227364 INFO nova.scheduler.client.report [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Deleted allocations for instance aaaa6a6b-0c21-483c-b891-02ebe64e6aab#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.119 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.120 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.138 227364 DEBUG oslo_concurrency.processutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:46.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/864843271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.607 227364 DEBUG oslo_concurrency.processutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.613 227364 DEBUG nova.compute.provider_tree [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:46 np0005539551 nova_compute[227360]: 2025-11-29 08:35:46.653 227364 DEBUG nova.scheduler.client.report [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.113 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.114 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.124 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.644 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405332.6433504, 2293bfb9-d91d-4ee4-8347-317cf45fe9c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.645 227364 INFO nova.compute.manager [-] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.691 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.691 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.707 227364 DEBUG nova.compute.manager [None req-a73e2eab-3d64-4aee-a0db-20f3f753b71d - - - - - -] [instance: 2293bfb9-d91d-4ee4-8347-317cf45fe9c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.741 227364 WARNING nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.742 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Triggering sync for uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.743 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.753 227364 DEBUG oslo_concurrency.lockutils [None req-b2dfe170-0a4a-416a-8860-3e30d189528a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.753 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.782 227364 INFO nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During the sync_power process the instance has moved from host None to host compute-1.ctlplane.example.com#033[00m
Nov 29 03:35:47 np0005539551 nova_compute[227360]: 2025-11-29 08:35:47.783 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:48 np0005539551 nova_compute[227360]: 2025-11-29 08:35:48.132 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:35:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:48.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:35:48 np0005539551 nova_compute[227360]: 2025-11-29 08:35:48.461 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:49 np0005539551 nova_compute[227360]: 2025-11-29 08:35:49.025 227364 DEBUG nova.network.neutron [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updated VIF entry in instance network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:49 np0005539551 nova_compute[227360]: 2025-11-29 08:35:49.025 227364 DEBUG nova.network.neutron [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2c7cb162-70", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 03:35:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:49 np0005539551 podman[289539]: 2025-11-29 08:35:49.616283692 +0000 UTC m=+0.066296815 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:35:49 np0005539551 podman[289538]: 2025-11-29 08:35:49.619960171 +0000 UTC m=+0.071875016 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:35:49 np0005539551 podman[289537]: 2025-11-29 08:35:49.62843219 +0000 UTC m=+0.083537671 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 03:35:50 np0005539551 nova_compute[227360]: 2025-11-29 08:35:50.147 227364 DEBUG oslo_concurrency.lockutils [req-fab2e37c-70f6-4bf1-b80e-fd06fc212a9f req-c9dda64a-bdde-47fe-819a-094e56d2dfcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:50.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:50 np0005539551 nova_compute[227360]: 2025-11-29 08:35:50.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:50 np0005539551 nova_compute[227360]: 2025-11-29 08:35:50.558 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405335.5566056, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:50 np0005539551 nova_compute[227360]: 2025-11-29 08:35:50.558 227364 INFO nova.compute.manager [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:53 np0005539551 nova_compute[227360]: 2025-11-29 08:35:52.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 03:35:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:53 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:53 np0005539551 nova_compute[227360]: 2025-11-29 08:35:53.661 227364 DEBUG nova.compute.manager [None req-f0b06f93-d243-4185-9b4c-73fe226afb8b - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:53 np0005539551 nova_compute[227360]: 2025-11-29 08:35:53.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:54 np0005539551 nova_compute[227360]: 2025-11-29 08:35:54.909 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405339.9088888, b08df6d7-85bd-4c2a-9bd8-f37384b9148a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:54 np0005539551 nova_compute[227360]: 2025-11-29 08:35:54.909 227364 INFO nova.compute.manager [-] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:55 np0005539551 nova_compute[227360]: 2025-11-29 08:35:55.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:55 np0005539551 nova_compute[227360]: 2025-11-29 08:35:55.595 227364 DEBUG nova.compute.manager [None req-67b014e0-26db-41b3-960e-ffbb6aad94f9 - - - - - -] [instance: b08df6d7-85bd-4c2a-9bd8-f37384b9148a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 03:35:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:55.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:55 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:55.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:55 np0005539551 nova_compute[227360]: 2025-11-29 08:35:55.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:56 np0005539551 podman[289774]: 2025-11-29 08:35:56.242912098 +0000 UTC m=+0.057093556 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:56 np0005539551 podman[289774]: 2025-11-29 08:35:56.363725096 +0000 UTC m=+0.177906534 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:35:56 np0005539551 nova_compute[227360]: 2025-11-29 08:35:56.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:56 np0005539551 nova_compute[227360]: 2025-11-29 08:35:56.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:35:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 03:35:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:57 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539551 nova_compute[227360]: 2025-11-29 08:35:58.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.326077) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359326177, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1117, "num_deletes": 256, "total_data_size": 2278123, "memory_usage": 2312864, "flush_reason": "Manual Compaction"}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359336951, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 1492367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59319, "largest_seqno": 60431, "table_properties": {"data_size": 1487202, "index_size": 2625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10711, "raw_average_key_size": 18, "raw_value_size": 1476740, "raw_average_value_size": 2586, "num_data_blocks": 114, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405286, "oldest_key_time": 1764405286, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 10916 microseconds, and 4801 cpu microseconds.
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.337004) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 1492367 bytes OK
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.337031) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338958) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.339004) EVENT_LOG_v1 {"time_micros": 1764405359338994, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.339028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2272580, prev total WAL file size 2276773, number of live WAL files 2.
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.340049) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(1457KB)], [117(12MB)]
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359340100, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14286598, "oldest_snapshot_seqno": -1}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9264 keys, 13173232 bytes, temperature: kUnknown
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359428958, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 13173232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13112039, "index_size": 36953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 242547, "raw_average_key_size": 26, "raw_value_size": 12947616, "raw_average_value_size": 1397, "num_data_blocks": 1421, "num_entries": 9264, "num_filter_entries": 9264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.429191) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13173232 bytes
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.430927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.7 rd, 148.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(18.4) write-amplify(8.8) OK, records in: 9795, records dropped: 531 output_compression: NoCompression
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.430945) EVENT_LOG_v1 {"time_micros": 1764405359430936, "job": 74, "event": "compaction_finished", "compaction_time_micros": 88925, "compaction_time_cpu_micros": 28353, "output_level": 6, "num_output_files": 1, "total_output_size": 13173232, "num_input_records": 9795, "num_output_records": 9264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359431277, "job": 74, "event": "table_file_deletion", "file_number": 119}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359433298, "job": 74, "event": "table_file_deletion", "file_number": 117}
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.339926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:35:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 03:35:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:59 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:59.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:00 np0005539551 nova_compute[227360]: 2025-11-29 08:36:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:00 np0005539551 nova_compute[227360]: 2025-11-29 08:36:00.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc407a6f0 =====
Nov 29 03:36:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc407a6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:01 np0005539551 radosgw[83679]: beast: 0x7fabc407a6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.532 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.533 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.533 227364 INFO nova.compute.manager [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Unshelving#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.662 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.668 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.668 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.673 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_requests' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.688 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'numa_topology' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.704 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:36:03 np0005539551 nova_compute[227360]: 2025-11-29 08:36:03.704 227364 INFO nova.compute.claims [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:36:05 np0005539551 nova_compute[227360]: 2025-11-29 08:36:05.462 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:05.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:05 np0005539551 nova_compute[227360]: 2025-11-29 08:36:05.801 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1521754304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:06 np0005539551 nova_compute[227360]: 2025-11-29 08:36:06.274 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:06 np0005539551 nova_compute[227360]: 2025-11-29 08:36:06.281 227364 DEBUG nova.compute.provider_tree [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:06 np0005539551 nova_compute[227360]: 2025-11-29 08:36:06.954 227364 DEBUG nova.scheduler.client.report [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:06 np0005539551 nova_compute[227360]: 2025-11-29 08:36:06.987 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:07 np0005539551 nova_compute[227360]: 2025-11-29 08:36:07.528 227364 INFO nova.network.neutron [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:36:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:07.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:07.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.213 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.214 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.215 227364 DEBUG nova.network.neutron [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.376 227364 DEBUG nova.compute.manager [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.376 227364 DEBUG nova.compute.manager [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing instance network info cache due to event network-changed-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.377 227364 DEBUG oslo_concurrency.lockutils [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:08 np0005539551 nova_compute[227360]: 2025-11-29 08:36:08.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:09.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:09.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.937 227364 DEBUG nova.network.neutron [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.955 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.956 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.957 227364 INFO nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating image(s)#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.980 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.983 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'trusted_certs' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.985 227364 DEBUG oslo_concurrency.lockutils [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:09 np0005539551 nova_compute[227360]: 2025-11-29 08:36:09.985 227364 DEBUG nova.network.neutron [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Refreshing network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.033 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.056 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.059 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "2e9215b45f1952975ce67bc2fa0a2f84a514e88a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.060 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "2e9215b45f1952975ce67bc2fa0a2f84a514e88a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.575 227364 DEBUG nova.virt.libvirt.imagebackend [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f502084d-e717-47c5-8881-072bf335e491/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f502084d-e717-47c5-8881-072bf335e491/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.633 227364 DEBUG nova.virt.libvirt.imagebackend [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f502084d-e717-47c5-8881-072bf335e491/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.634 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] cloning images/f502084d-e717-47c5-8881-072bf335e491@snap to None/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:36:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.740 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "2e9215b45f1952975ce67bc2fa0a2f84a514e88a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.868 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'migration_context' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:10 np0005539551 nova_compute[227360]: 2025-11-29 08:36:10.930 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] flattening vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:36:11 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.292 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Image rbd:vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.292 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.293 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Ensure instance console log exists: /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.293 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.293 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.294 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.296 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start _get_guest_xml network_info=[{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:35:32Z,direct_url=<?>,disk_format='raw',id=f502084d-e717-47c5-8881-072bf335e491,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1484770108-shelved',owner='96e72e7660da497a8b6bf9fdb03fe84c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:35:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.299 227364 WARNING nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.306 227364 DEBUG nova.virt.libvirt.host [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.306 227364 DEBUG nova.virt.libvirt.host [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.310 227364 DEBUG nova.virt.libvirt.host [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.311 227364 DEBUG nova.virt.libvirt.host [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.311 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.312 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:35:32Z,direct_url=<?>,disk_format='raw',id=f502084d-e717-47c5-8881-072bf335e491,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1484770108-shelved',owner='96e72e7660da497a8b6bf9fdb03fe84c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:35:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.312 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.312 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.313 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.313 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.313 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.313 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.313 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.314 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.314 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.314 227364 DEBUG nova.virt.hardware [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.314 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'vcpu_model' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.329 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:11.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2475678124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.749 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.783 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:11 np0005539551 nova_compute[227360]: 2025-11-29 08:36:11.787 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.027 227364 DEBUG nova.network.neutron [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updated VIF entry in instance network info cache for port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.027 227364 DEBUG nova.network.neutron [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.058 227364 DEBUG oslo_concurrency.lockutils [req-ae8d1a70-05d2-4561-ae84-206eff08f0c3 req-a2dc6db8-92a9-4492-985a-4a516e8c56f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3286647873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.215 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.216 227364 DEBUG nova.virt.libvirt.vif [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='f502084d-e717-47c5-8881-072bf335e491',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member',shelved_at='2025-11-29T08:35:41.615165',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='f502084d-e717-47c5-8881-072bf335e491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:03Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.216 227364 DEBUG nova.network.os_vif_util [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.217 227364 DEBUG nova.network.os_vif_util [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.218 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_devices' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.232 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <uuid>aaaa6a6b-0c21-483c-b891-02ebe64e6aab</uuid>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <name>instance-000000aa</name>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:name>tempest-ServersNegativeTestJSON-server-1484770108</nova:name>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:36:11</nova:creationTime>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:user uuid="a57807acb02b45d082f242ec62cd5b6f">tempest-ServersNegativeTestJSON-1016750887-project-member</nova:user>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:project uuid="96e72e7660da497a8b6bf9fdb03fe84c">tempest-ServersNegativeTestJSON-1016750887</nova:project>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="f502084d-e717-47c5-8881-072bf335e491"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <nova:port uuid="2c7cb162-70f0-496f-a2bf-0b0af61bd4b2">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="serial">aaaa6a6b-0c21-483c-b891-02ebe64e6aab</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="uuid">aaaa6a6b-0c21-483c-b891-02ebe64e6aab</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:c5:09:19"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <target dev="tap2c7cb162-70"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/console.log" append="off"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:36:12 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:36:12 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:36:12 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:36:12 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.233 227364 DEBUG nova.compute.manager [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Preparing to wait for external event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.233 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.233 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.233 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.234 227364 DEBUG nova.virt.libvirt.vif [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='f502084d-e717-47c5-8881-072bf335e491',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member',shelved_at='2025-11-29T08:35:41.615165',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='f502084d-e717-47c5-8881-072bf335e491'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:03Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.234 227364 DEBUG nova.network.os_vif_util [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.235 227364 DEBUG nova.network.os_vif_util [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.235 227364 DEBUG os_vif [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.236 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.236 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.236 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.240 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.240 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c7cb162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.241 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c7cb162-70, col_values=(('external_ids', {'iface-id': '2c7cb162-70f0-496f-a2bf-0b0af61bd4b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:09:19', 'vm-uuid': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:12 np0005539551 NetworkManager[48922]: <info>  [1764405372.2431] manager: (tap2c7cb162-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.244 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.248 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.248 227364 INFO os_vif [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70')#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.302 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.302 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.302 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No VIF found with MAC fa:16:3e:c5:09:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.303 227364 INFO nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Using config drive#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.332 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.353 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'ec2_ids' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.396 227364 DEBUG nova.objects.instance [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'keypairs' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.827 227364 INFO nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Creating config drive at /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.831 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lupfroo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.964 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lupfroo" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:12 np0005539551 nova_compute[227360]: 2025-11-29 08:36:12.997 227364 DEBUG nova.storage.rbd_utils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.002 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.171 227364 DEBUG oslo_concurrency.processutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config aaaa6a6b-0c21-483c-b891-02ebe64e6aab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.172 227364 INFO nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deleting local config drive /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab/disk.config because it was imported into RBD.#033[00m
Nov 29 03:36:13 np0005539551 kernel: tap2c7cb162-70: entered promiscuous mode
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.2506] manager: (tap2c7cb162-70): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 03:36:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:13Z|00768|binding|INFO|Claiming lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for this chassis.
Nov 29 03:36:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:13Z|00769|binding|INFO|2c7cb162-70f0-496f-a2bf-0b0af61bd4b2: Claiming fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.267 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.269 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 bound to our chassis#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.270 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a83b79bc-6262-43e7-a9e5-5e808a213726#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.282 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[31c1cb70-5c81-4a0b-925e-85afceb6fbd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.284 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa83b79bc-61 in ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.285 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa83b79bc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.285 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a87914-436c-4847-a133-b6f9a0831036]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.286 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[73d653bf-5897-4785-be80-a5f1d1e39805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 systemd-machined[190756]: New machine qemu-80-instance-000000aa.
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.298 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd3f218-211c-4498-889e-f61b88562d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 systemd[1]: Started Virtual Machine qemu-80-instance-000000aa.
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.317 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.322 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.322 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[868c5816-0d32-4bdb-b0fd-b3c28af77dbc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 systemd-udevd[290452]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:13Z|00770|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 ovn-installed in OVS
Nov 29 03:36:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:13Z|00771|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 up in Southbound
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.327 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.3373] device (tap2c7cb162-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.3380] device (tap2c7cb162-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.351 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3d3d20-1310-4642-9345-71333dffee26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 systemd-udevd[290456]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.3569] manager: (tapa83b79bc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.356 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff8b5ed-8a1f-4715-8b77-881300000526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.383 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf65cd3-c754-4d91-a13a-02c62843a4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.387 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[be661589-1c30-440d-91c4-853681666be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.4095] device (tapa83b79bc-60): carrier: link connected
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.413 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[72930b60-8e7d-444f-9b2c-304bb6cf6d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.428 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b41e77cc-ac55-4192-87e8-fe95d8a2d747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842028, 'reachable_time': 29032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290482, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.445 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a73c7a0-6394-410a-93d7-b367d8f7cd20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:4191'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842028, 'tstamp': 842028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290483, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.459 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04554e6f-6fd5-4f25-98fa-9aba9e6b4823]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842028, 'reachable_time': 29032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290484, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.486 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[411c1046-67eb-49fc-8258-fcbf686970ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.542 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[72ad5cba-b405-407a-8f50-b992c3f41f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.544 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.544 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.544 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa83b79bc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 kernel: tapa83b79bc-60: entered promiscuous mode
Nov 29 03:36:13 np0005539551 NetworkManager[48922]: <info>  [1764405373.5477] manager: (tapa83b79bc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.549 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa83b79bc-60, col_values=(('external_ids', {'iface-id': '5551fa67-e815-437e-8413-5562ca9c4d10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:13Z|00772|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.565 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.566 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.567 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[658c8762-c15a-4fac-850d-7d33adbc2cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.568 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:13 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:13.570 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'env', 'PROCESS_TAG=haproxy-a83b79bc-6262-43e7-a9e5-5e808a213726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a83b79bc-6262-43e7-a9e5-5e808a213726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.628 227364 DEBUG nova.compute.manager [req-b6310910-7ce6-4c91-9e80-342101b16d06 req-62f93dbc-05ce-48a7-95a8-299ae8f3bc5e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.628 227364 DEBUG oslo_concurrency.lockutils [req-b6310910-7ce6-4c91-9e80-342101b16d06 req-62f93dbc-05ce-48a7-95a8-299ae8f3bc5e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.629 227364 DEBUG oslo_concurrency.lockutils [req-b6310910-7ce6-4c91-9e80-342101b16d06 req-62f93dbc-05ce-48a7-95a8-299ae8f3bc5e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.629 227364 DEBUG oslo_concurrency.lockutils [req-b6310910-7ce6-4c91-9e80-342101b16d06 req-62f93dbc-05ce-48a7-95a8-299ae8f3bc5e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.629 227364 DEBUG nova.compute.manager [req-b6310910-7ce6-4c91-9e80-342101b16d06 req-62f93dbc-05ce-48a7-95a8-299ae8f3bc5e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Processing event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:36:13 np0005539551 nova_compute[227360]: 2025-11-29 08:36:13.666 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:13.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539551 podman[290522]: 2025-11-29 08:36:13.999770163 +0000 UTC m=+0.070155109 container create 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:36:14 np0005539551 systemd[1]: Started libpod-conmon-05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6.scope.
Nov 29 03:36:14 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:36:14 np0005539551 podman[290522]: 2025-11-29 08:36:13.969103143 +0000 UTC m=+0.039488169 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:14 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1fa76a6229d02d9aafc2959cb544eb882ab5b2e9e40f634c7b450edec656a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.072 227364 DEBUG nova.compute.manager [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.073 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405374.0717301, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.073 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.075 227364 DEBUG nova.virt.libvirt.driver [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:36:14 np0005539551 podman[290522]: 2025-11-29 08:36:14.07618907 +0000 UTC m=+0.146574036 container init 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.079 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance spawned successfully.#033[00m
Nov 29 03:36:14 np0005539551 podman[290522]: 2025-11-29 08:36:14.08247225 +0000 UTC m=+0.152857216 container start 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.097 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.100 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:14 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [NOTICE]   (290576) : New worker (290578) forked
Nov 29 03:36:14 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [NOTICE]   (290576) : Loading success.
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.126 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.127 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405374.071867, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.127 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.149 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.152 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405374.0754337, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.152 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.175 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.178 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:14 np0005539551 nova_compute[227360]: 2025-11-29 08:36:14.205 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 03:36:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:15.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:15.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.780 227364 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.781 227364 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.782 227364 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.783 227364 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.783 227364 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.784 227364 WARNING nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.897 227364 DEBUG nova.compute.manager [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:15 np0005539551 nova_compute[227360]: 2025-11-29 08:36:15.998 227364 DEBUG oslo_concurrency.lockutils [None req-1cbf62f0-dbc0-4736-9314-0ef6b332674a a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:17 np0005539551 nova_compute[227360]: 2025-11-29 08:36:17.242 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:17.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:17.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539551 nova_compute[227360]: 2025-11-29 08:36:18.667 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:19.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:19.885 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:19.886 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:19.887 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:20 np0005539551 podman[290590]: 2025-11-29 08:36:20.61019839 +0000 UTC m=+0.061906106 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:36:20 np0005539551 podman[290589]: 2025-11-29 08:36:20.610363134 +0000 UTC m=+0.065087641 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:36:20 np0005539551 podman[290588]: 2025-11-29 08:36:20.674071018 +0000 UTC m=+0.132454254 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:21.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.246 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:22.293 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:22.294 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.294 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.792 227364 DEBUG nova.objects.instance [None req-abd01b06-6806-4980-9d23-f9849b7de1a7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_devices' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.814 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405382.814812, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.815 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.831 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.836 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:22 np0005539551 nova_compute[227360]: 2025-11-29 08:36:22.851 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:36:23 np0005539551 nova_compute[227360]: 2025-11-29 08:36:23.668 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:23.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:24 np0005539551 kernel: tap2c7cb162-70 (unregistering): left promiscuous mode
Nov 29 03:36:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 03:36:24 np0005539551 NetworkManager[48922]: <info>  [1764405384.5622] device (tap2c7cb162-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:24Z|00773|binding|INFO|Releasing lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 from this chassis (sb_readonly=0)
Nov 29 03:36:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:24Z|00774|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 down in Southbound
Nov 29 03:36:24 np0005539551 nova_compute[227360]: 2025-11-29 08:36:24.567 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:24Z|00775|binding|INFO|Removing iface tap2c7cb162-70 ovn-installed in OVS
Nov 29 03:36:24 np0005539551 nova_compute[227360]: 2025-11-29 08:36:24.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:24.575 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:24.576 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 unbound from our chassis#033[00m
Nov 29 03:36:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:24.577 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a83b79bc-6262-43e7-a9e5-5e808a213726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:24.578 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e016d7f7-b551-453f-8d77-43888ba9007d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:24 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:24.579 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace which is not needed anymore#033[00m
Nov 29 03:36:24 np0005539551 nova_compute[227360]: 2025-11-29 08:36:24.588 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539551 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Nov 29 03:36:24 np0005539551 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000aa.scope: Consumed 10.097s CPU time.
Nov 29 03:36:24 np0005539551 systemd-machined[190756]: Machine qemu-80-instance-000000aa terminated.
Nov 29 03:36:24 np0005539551 nova_compute[227360]: 2025-11-29 08:36:24.758 227364 DEBUG nova.compute.manager [None req-abd01b06-6806-4980-9d23-f9849b7de1a7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [NOTICE]   (290576) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [NOTICE]   (290576) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [WARNING]  (290576) : Exiting Master process...
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [WARNING]  (290576) : Exiting Master process...
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [ALERT]    (290576) : Current worker (290578) exited with code 143 (Terminated)
Nov 29 03:36:24 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290572]: [WARNING]  (290576) : All workers exited. Exiting... (0)
Nov 29 03:36:24 np0005539551 systemd[1]: libpod-05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6.scope: Deactivated successfully.
Nov 29 03:36:24 np0005539551 podman[290677]: 2025-11-29 08:36:24.833500121 +0000 UTC m=+0.172430326 container died 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:36:24 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:24 np0005539551 systemd[1]: var-lib-containers-storage-overlay-6c1fa76a6229d02d9aafc2959cb544eb882ab5b2e9e40f634c7b450edec656a7-merged.mount: Deactivated successfully.
Nov 29 03:36:24 np0005539551 podman[290677]: 2025-11-29 08:36:24.864428797 +0000 UTC m=+0.203359002 container cleanup 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:36:24 np0005539551 systemd[1]: libpod-conmon-05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6.scope: Deactivated successfully.
Nov 29 03:36:25 np0005539551 podman[290719]: 2025-11-29 08:36:25.047330005 +0000 UTC m=+0.163935926 container remove 05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.052 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[508da845-22f4-4871-98ab-7b8151a1ae7a]: (4, ('Sat Nov 29 08:36:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6)\n05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6\nSat Nov 29 08:36:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6)\n05d401ad88406e2d2345b4c90f6190fa8df33098b1e87cfab85926f7b1deeba6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.053 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ce262e3e-704e-48fc-9973-f54d401d78dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.054 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:25 np0005539551 nova_compute[227360]: 2025-11-29 08:36:25.056 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539551 kernel: tapa83b79bc-60: left promiscuous mode
Nov 29 03:36:25 np0005539551 nova_compute[227360]: 2025-11-29 08:36:25.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.075 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55cc81-8ff4-4402-91e7-efc6ed8757e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.088 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[98c748e4-7c4a-4d31-b4a1-0036fdd53e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.090 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3dee4699-30e2-40e1-af3c-02b4413cf0e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.104 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7d708187-2bc6-46c0-99e4-19cce5cc97c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842022, 'reachable_time': 17286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290738, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.106 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:25.106 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed33a89-ada8-4a6a-8cf8-c486b91d196c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:25 np0005539551 systemd[1]: run-netns-ovnmeta\x2da83b79bc\x2d6262\x2d43e7\x2da9e5\x2d5e808a213726.mount: Deactivated successfully.
Nov 29 03:36:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:25.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.242 227364 DEBUG nova.compute.manager [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.243 227364 DEBUG oslo_concurrency.lockutils [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.243 227364 DEBUG oslo_concurrency.lockutils [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.243 227364 DEBUG oslo_concurrency.lockutils [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.244 227364 DEBUG nova.compute.manager [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:26 np0005539551 nova_compute[227360]: 2025-11-29 08:36:26.244 227364 WARNING nova.compute.manager [req-80ca5758-9909-4ddf-9528-88e941159a36 req-721692a8-8ce6-4679-a11d-c5f87c93abd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.249 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.348 227364 INFO nova.compute.manager [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Resuming#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.349 227364 DEBUG nova.objects.instance [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'flavor' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.384 227364 DEBUG oslo_concurrency.lockutils [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.385 227364 DEBUG oslo_concurrency.lockutils [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquired lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:27 np0005539551 nova_compute[227360]: 2025-11-29 08:36:27.385 227364 DEBUG nova.network.neutron [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:28.295 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.383 227364 DEBUG nova.compute.manager [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.383 227364 DEBUG oslo_concurrency.lockutils [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.384 227364 DEBUG oslo_concurrency.lockutils [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.384 227364 DEBUG oslo_concurrency.lockutils [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.384 227364 DEBUG nova.compute.manager [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.384 227364 WARNING nova.compute.manager [req-3c925b21-7d92-4c69-8ff5-f7d2e9a1b1bd req-a81ab9d3-c176-4973-b335-2a97b45fa37d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:36:28 np0005539551 nova_compute[227360]: 2025-11-29 08:36:28.670 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.156 227364 DEBUG nova.network.neutron [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [{"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.179 227364 DEBUG oslo_concurrency.lockutils [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Releasing lock "refresh_cache-aaaa6a6b-0c21-483c-b891-02ebe64e6aab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.184 227364 DEBUG nova.virt.libvirt.vif [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:24Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.185 227364 DEBUG nova.network.os_vif_util [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.186 227364 DEBUG nova.network.os_vif_util [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.186 227364 DEBUG os_vif [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.187 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.187 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.187 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.190 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.190 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c7cb162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.190 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c7cb162-70, col_values=(('external_ids', {'iface-id': '2c7cb162-70f0-496f-a2bf-0b0af61bd4b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:09:19', 'vm-uuid': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.191 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.191 227364 INFO os_vif [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70')#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.209 227364 DEBUG nova.objects.instance [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'numa_topology' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:29 np0005539551 kernel: tap2c7cb162-70: entered promiscuous mode
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.2664] manager: (tap2c7cb162-70): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 03:36:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:29Z|00776|binding|INFO|Claiming lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for this chassis.
Nov 29 03:36:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:29Z|00777|binding|INFO|2c7cb162-70f0-496f-a2bf-0b0af61bd4b2: Claiming fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.269 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.275 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.276 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 bound to our chassis#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.277 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a83b79bc-6262-43e7-a9e5-5e808a213726#033[00m
Nov 29 03:36:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:29Z|00778|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 ovn-installed in OVS
Nov 29 03:36:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:29Z|00779|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 up in Southbound
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.288 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[95b10ab6-d101-4ff6-96f1-dd0165dc888c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.288 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa83b79bc-61 in ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.290 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa83b79bc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.290 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e36a1551-2726-4830-8fdd-8cc833f333d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.292 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b01d49a8-0543-4364-8f6f-5c8f269bb413]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 systemd-udevd[290753]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:29 np0005539551 systemd-machined[190756]: New machine qemu-81-instance-000000aa.
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.306 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[047544ac-bc3b-4b4f-87e1-75ff31620d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.3110] device (tap2c7cb162-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.3119] device (tap2c7cb162-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:29 np0005539551 systemd[1]: Started Virtual Machine qemu-81-instance-000000aa.
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.318 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e64134ed-fb1e-46ad-9578-ab066668d14b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.351 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b498dd-ad72-42f5-b835-109158f7e78d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 systemd-udevd[290757]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.3580] manager: (tapa83b79bc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.357 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8a795966-9b06-4b37-92fc-563f42efdb53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.388 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[96623546-0472-4261-a084-4f5ad1b9d750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.391 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[27f76082-4528-4518-9857-4c9d8e0489c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.4121] device (tapa83b79bc-60): carrier: link connected
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.418 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[acddf172-ab45-45e0-b5e3-5364d2cfd028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.434 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e40ba710-913d-42e4-af16-2b88baad6782]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843629, 'reachable_time': 20889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290786, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.447 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc09cbe-b593-4d33-b181-3960fc030b34]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:4191'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 843629, 'tstamp': 843629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290787, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.461 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fcaf3043-09f2-4a2c-ac82-25e2eadb51c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843629, 'reachable_time': 20889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290788, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.490 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b248b7-9fc6-48d7-a306-e4d407aa690c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.543 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb42bd20-a035-464c-ba14-d5ec325bbe9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.544 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.544 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.545 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa83b79bc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 kernel: tapa83b79bc-60: entered promiscuous mode
Nov 29 03:36:29 np0005539551 NetworkManager[48922]: <info>  [1764405389.5474] manager: (tapa83b79bc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.548 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.549 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa83b79bc-60, col_values=(('external_ids', {'iface-id': '5551fa67-e815-437e-8413-5562ca9c4d10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:29Z|00780|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.549 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 nova_compute[227360]: 2025-11-29 08:36:29.562 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.564 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.565 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[200799c8-9143-4b15-905a-c36574bf2793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.566 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:29.566 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'env', 'PROCESS_TAG=haproxy-a83b79bc-6262-43e7-a9e5-5e808a213726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a83b79bc-6262-43e7-a9e5-5e808a213726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:29.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:29 np0005539551 podman[290838]: 2025-11-29 08:36:29.914093783 +0000 UTC m=+0.052263705 container create e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:36:29 np0005539551 systemd[1]: Started libpod-conmon-e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d.scope.
Nov 29 03:36:29 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:36:29 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5054fd5a661199be3517aa70b5aa3211bdeafba9c6923a000d92f5cef55bc16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:29 np0005539551 podman[290838]: 2025-11-29 08:36:29.885276593 +0000 UTC m=+0.023446545 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:29 np0005539551 podman[290838]: 2025-11-29 08:36:29.989020309 +0000 UTC m=+0.127190271 container init e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:29 np0005539551 podman[290838]: 2025-11-29 08:36:29.994648031 +0000 UTC m=+0.132817963 container start e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:36:30 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [NOTICE]   (290857) : New worker (290859) forked
Nov 29 03:36:30 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [NOTICE]   (290857) : Loading success.
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.497 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for aaaa6a6b-0c21-483c-b891-02ebe64e6aab due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.498 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405390.496962, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.498 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.523 227364 DEBUG nova.compute.manager [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.523 227364 DEBUG nova.objects.instance [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_devices' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.531 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.535 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.539 227364 DEBUG nova.compute.manager [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.539 227364 DEBUG oslo_concurrency.lockutils [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.540 227364 DEBUG oslo_concurrency.lockutils [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.540 227364 DEBUG oslo_concurrency.lockutils [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.540 227364 DEBUG nova.compute.manager [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.540 227364 WARNING nova.compute.manager [req-e068e3aa-b34d-450c-9b75-aac3c2278a53 req-25080921-75e7-4734-b02d-00bb042df9a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state suspended and task_state resuming.
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.543 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance running successfully.
Nov 29 03:36:30 np0005539551 virtqemud[226785]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.546 227364 DEBUG nova.virt.libvirt.guest [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.546 227364 DEBUG nova.compute.manager [None req-02299c8f-1936-4cfe-948d-0fde67ee23b7 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.555 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.556 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405390.501542, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.556 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Resumed (Lifecycle Event)
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.583 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.585 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:36:30 np0005539551 nova_compute[227360]: 2025-11-29 08:36:30.665 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 29 03:36:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.289 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.647 227364 DEBUG nova.compute.manager [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.647 227364 DEBUG oslo_concurrency.lockutils [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.648 227364 DEBUG oslo_concurrency.lockutils [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.648 227364 DEBUG oslo_concurrency.lockutils [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.648 227364 DEBUG nova.compute.manager [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:36:32 np0005539551 nova_compute[227360]: 2025-11-29 08:36:32.648 227364 WARNING nova.compute.manager [req-375cfdc3-d041-430d-89bd-89e28f611b57 req-df4d6138-cb3e-4d09-9cc5-a62f0ed7c70b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state active and task_state None.
Nov 29 03:36:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 03:36:33 np0005539551 nova_compute[227360]: 2025-11-29 08:36:33.672 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.429 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.430 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.446 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:36:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:36Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:09:19 10.100.0.11
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.513 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.513 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.521 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.521 227364 INFO nova.compute.claims [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.734 227364 DEBUG nova.scheduler.client.report [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.752 227364 DEBUG nova.scheduler.client.report [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.753 227364 DEBUG nova.compute.provider_tree [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.769 227364 DEBUG nova.scheduler.client.report [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.791 227364 DEBUG nova.scheduler.client.report [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:36:36 np0005539551 nova_compute[227360]: 2025-11-29 08:36:36.837 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1375366924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.322 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.329 227364 DEBUG nova.compute.provider_tree [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.351 227364 DEBUG nova.scheduler.client.report [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.395 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.396 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.454 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.455 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.488 227364 INFO nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.510 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.607 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.609 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.609 227364 INFO nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Creating image(s)
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.648 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.686 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.714 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:36:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.717 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "b734e4b236330b7f0c98a124ac144d2673a30e56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.718 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "b734e4b236330b7f0c98a124ac144d2673a30e56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:37 np0005539551 nova_compute[227360]: 2025-11-29 08:36:37.721 227364 DEBUG nova.policy [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8b20745b2d14f70b64a43335faed2f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:36:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:37.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:38 np0005539551 nova_compute[227360]: 2025-11-29 08:36:38.051 227364 DEBUG nova.virt.libvirt.imagebackend [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/8a30eb6e-dddc-4108-8576-dcd7c8d5406f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/8a30eb6e-dddc-4108-8576-dcd7c8d5406f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 03:36:38 np0005539551 nova_compute[227360]: 2025-11-29 08:36:38.674 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.020 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Successfully created port: 06d48b3b-ec07-4803-9204-b300217b41d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.283 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.361 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.362 227364 DEBUG nova.virt.images [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] 8a30eb6e-dddc-4108-8576-dcd7c8d5406f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.363 227364 DEBUG nova.privsep.utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.363 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.part /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.595 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.part /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.600 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.662 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56.converted --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.663 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "b734e4b236330b7f0c98a124ac144d2673a30e56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.687 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:36:39 np0005539551 nova_compute[227360]: 2025-11-29 08:36:39.690 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.008 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Successfully updated port: 06d48b3b-ec07-4803-9204-b300217b41d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.039 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.040 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.040 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.042 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.134 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] resizing rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.480 227364 DEBUG nova.objects.instance [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.494 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.495 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Ensure instance console log exists: /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.495 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.496 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:40 np0005539551 nova_compute[227360]: 2025-11-29 08:36:40.496 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:41 np0005539551 nova_compute[227360]: 2025-11-29 08:36:41.044 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:36:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:41.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:41 np0005539551 nova_compute[227360]: 2025-11-29 08:36:41.811 227364 DEBUG nova.compute.manager [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:41 np0005539551 nova_compute[227360]: 2025-11-29 08:36:41.811 227364 DEBUG nova.compute.manager [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:41 np0005539551 nova_compute[227360]: 2025-11-29 08:36:41.811 227364 DEBUG oslo_concurrency.lockutils [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:42 np0005539551 nova_compute[227360]: 2025-11-29 08:36:42.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:42 np0005539551 nova_compute[227360]: 2025-11-29 08:36:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.257 227364 DEBUG nova.network.neutron [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.275 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.275 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance network_info: |[{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.276 227364 DEBUG oslo_concurrency.lockutils [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.276 227364 DEBUG nova.network.neutron [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.279 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start _get_guest_xml network_info=[{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:36:31Z,direct_url=<?>,disk_format='qcow2',id=8a30eb6e-dddc-4108-8576-dcd7c8d5406f,min_disk=0,min_ram=0,name='tempest-scenario-img--1804933970',owner='8d5e30b74e6449dd90ecb667977d1fe9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:36:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '8a30eb6e-dddc-4108-8576-dcd7c8d5406f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.284 227364 WARNING nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.293 227364 DEBUG nova.virt.libvirt.host [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.294 227364 DEBUG nova.virt.libvirt.host [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.298 227364 DEBUG nova.virt.libvirt.host [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.298 227364 DEBUG nova.virt.libvirt.host [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.299 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.300 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:36:31Z,direct_url=<?>,disk_format='qcow2',id=8a30eb6e-dddc-4108-8576-dcd7c8d5406f,min_disk=0,min_ram=0,name='tempest-scenario-img--1804933970',owner='8d5e30b74e6449dd90ecb667977d1fe9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:36:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.300 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.300 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.301 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.301 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.301 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.301 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.302 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.302 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.302 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.302 227364 DEBUG nova.virt.hardware [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.305 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.677 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:43.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1045128138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.750 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.777 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:43 np0005539551 nova_compute[227360]: 2025-11-29 08:36:43.780 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1351555553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.201 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.203 227364 DEBUG nova.virt.libvirt.vif [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:37Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.203 227364 DEBUG nova.network.os_vif_util [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.204 227364 DEBUG nova.network.os_vif_util [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.205 227364 DEBUG nova.objects.instance [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.220 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <uuid>1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</uuid>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <name>instance-000000b2</name>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestMinimumBasicScenario-server-111008662</nova:name>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:36:43</nova:creationTime>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:user uuid="e8b20745b2d14f70b64a43335faed2f4">tempest-TestMinimumBasicScenario-1569311049-project-member</nova:user>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:project uuid="8d5e30b74e6449dd90ecb667977d1fe9">tempest-TestMinimumBasicScenario-1569311049</nova:project>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="8a30eb6e-dddc-4108-8576-dcd7c8d5406f"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <nova:port uuid="06d48b3b-ec07-4803-9204-b300217b41d1">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="serial">1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="uuid">1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:9b:8f:68"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <target dev="tap06d48b3b-ec"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/console.log" append="off"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:36:44 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:36:44 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:36:44 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:36:44 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.221 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Preparing to wait for external event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.222 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.222 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.222 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.223 227364 DEBUG nova.virt.libvirt.vif [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:37Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.223 227364 DEBUG nova.network.os_vif_util [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.224 227364 DEBUG nova.network.os_vif_util [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.224 227364 DEBUG os_vif [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.225 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.225 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.229 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.229 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06d48b3b-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.229 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06d48b3b-ec, col_values=(('external_ids', {'iface-id': '06d48b3b-ec07-4803-9204-b300217b41d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:8f:68', 'vm-uuid': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.231 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539551 NetworkManager[48922]: <info>  [1764405404.2324] manager: (tap06d48b3b-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.233 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.238 227364 INFO os_vif [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec')#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.296 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.297 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.297 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No VIF found with MAC fa:16:3e:9b:8f:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.298 227364 INFO nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Using config drive#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.323 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.340183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404340442, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 259, "total_data_size": 1283997, "memory_usage": 1305152, "flush_reason": "Manual Compaction"}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404350159, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 846779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60436, "largest_seqno": 61193, "table_properties": {"data_size": 843135, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8701, "raw_average_key_size": 19, "raw_value_size": 835517, "raw_average_value_size": 1856, "num_data_blocks": 63, "num_entries": 450, "num_filter_entries": 450, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405359, "oldest_key_time": 1764405359, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 9838 microseconds, and 3308 cpu microseconds.
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.350218) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 846779 bytes OK
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.350241) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.352010) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.352030) EVENT_LOG_v1 {"time_micros": 1764405404352023, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.352051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1279880, prev total WAL file size 1279880, number of live WAL files 2.
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.353150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(826KB)], [120(12MB)]
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404353204, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 14020011, "oldest_snapshot_seqno": -1}
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.456 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.456 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.457 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.458 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.458 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9180 keys, 13877222 bytes, temperature: kUnknown
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404461096, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13877222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13815447, "index_size": 37727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241850, "raw_average_key_size": 26, "raw_value_size": 13651330, "raw_average_value_size": 1487, "num_data_blocks": 1450, "num_entries": 9180, "num_filter_entries": 9180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.461459) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13877222 bytes
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.463835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 128.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(32.9) write-amplify(16.4) OK, records in: 9714, records dropped: 534 output_compression: NoCompression
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.463863) EVENT_LOG_v1 {"time_micros": 1764405404463850, "job": 76, "event": "compaction_finished", "compaction_time_micros": 107986, "compaction_time_cpu_micros": 40615, "output_level": 6, "num_output_files": 1, "total_output_size": 13877222, "num_input_records": 9714, "num_output_records": 9180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404464215, "job": 76, "event": "table_file_deletion", "file_number": 122}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404467500, "job": 76, "event": "table_file_deletion", "file_number": 120}
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.353047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.467610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.467616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.467618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.467620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:36:44.467622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.488 227364 DEBUG nova.network.neutron [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.489 227364 DEBUG nova.network.neutron [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.509 227364 DEBUG oslo_concurrency.lockutils [req-03e7cf6a-7113-40ef-927c-a6628ee11371 req-d0aba08c-4175-47d6-b979-8c954bae28c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.656 227364 INFO nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Creating config drive at /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.661 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyoh3i_i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.792 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyoh3i_i" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.821 227364 DEBUG nova.storage.rbd_utils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.824 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:44 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3830970661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:44 np0005539551 nova_compute[227360]: 2025-11-29 08:36:44.870 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.138 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.139 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.141 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.142 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.158 227364 DEBUG oslo_concurrency.processutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.158 227364 INFO nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Deleting local config drive /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.2041] manager: (tap06d48b3b-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 03:36:45 np0005539551 kernel: tap06d48b3b-ec: entered promiscuous mode
Nov 29 03:36:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:45Z|00781|binding|INFO|Claiming lport 06d48b3b-ec07-4803-9204-b300217b41d1 for this chassis.
Nov 29 03:36:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:45Z|00782|binding|INFO|06d48b3b-ec07-4803-9204-b300217b41d1: Claiming fa:16:3e:9b:8f:68 10.100.0.11
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.207 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.213 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.227 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:8f:68 10.100.0.11'], port_security=['fa:16:3e:9b:8f:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=06d48b3b-ec07-4803-9204-b300217b41d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.229 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 06d48b3b-ec07-4803-9204-b300217b41d1 in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 bound to our chassis#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.231 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0430aba5-d0d7-4d98-ad87-552e6639c190#033[00m
Nov 29 03:36:45 np0005539551 systemd-machined[190756]: New machine qemu-82-instance-000000b2.
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.243 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c7206ffe-37a6-4657-af66-3e2a22293e83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.244 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0430aba5-d1 in ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.245 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0430aba5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.246 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fabc6cf5-9357-419f-82ff-c95e60c8294a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.246 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ab98ce-f914-4ab7-9259-d67b4dd68dc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.257 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[1754c1a3-495d-442d-84c0-274418d6b64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 systemd[1]: Started Virtual Machine qemu-82-instance-000000b2.
Nov 29 03:36:45 np0005539551 systemd-udevd[291252]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.282 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e62978d9-67ca-45d0-b015-9f467022bcc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:45Z|00783|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 ovn-installed in OVS
Nov 29 03:36:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:45Z|00784|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 up in Southbound
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.2911] device (tap06d48b3b-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.2920] device (tap06d48b3b-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.310 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6405f838-e462-4d24-8c1b-e2fd2b587669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.3160] manager: (tap0430aba5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.315 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dce22c41-a4f8-4828-ae55-5289767b0f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.349 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[069c5350-94a9-4642-bf29-23dc223fb858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.353 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[db0d0534-199b-40fd-94b2-e73871e1fe48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:36:45 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.3797] device (tap0430aba5-d0): carrier: link connected
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.384 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8d163d-3ba4-42a8-b354-fe9cd4304d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.411 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[22ac53a1-20cb-457c-8128-4528fdbade82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845225, 'reachable_time': 15875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291283, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.430 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.430 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c78bf196-2664-49be-b555-c8d5f1d6b127]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:fd4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845225, 'tstamp': 845225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291284, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.431 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3969MB free_disk=20.900794982910156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.431 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.431 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.448 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5b6eb1-d501-4ee5-8404-4c8007d50712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845225, 'reachable_time': 15875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291285, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.477 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c6711524-2997-4d14-b983-2f4ac9df0e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.537 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b30667-47a3-4203-ae7e-7f450df189c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.538 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.539 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.539 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0430aba5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:45 np0005539551 kernel: tap0430aba5-d0: entered promiscuous mode
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.540 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 NetworkManager[48922]: <info>  [1764405405.5412] manager: (tap0430aba5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.542 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.545 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0430aba5-d0, col_values=(('external_ids', {'iface-id': 'dac731d0-69cc-4042-8450-f886e5854f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:45Z|00785|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.548 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.549 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e67f6b-e1c6-4b0d-894e-79ccc79cbce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.549 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:45.550 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'env', 'PROCESS_TAG=haproxy-0430aba5-d0d7-4d98-ad87-552e6639c190', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0430aba5-d0d7-4d98-ad87-552e6639c190.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.559 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.570 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance aaaa6a6b-0c21-483c-b891-02ebe64e6aab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.570 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.570 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.570 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.616 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:45.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.906 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.906 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.907 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.907 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.907 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.910 227364 INFO nova.compute.manager [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Terminating instance#033[00m
Nov 29 03:36:45 np0005539551 nova_compute[227360]: 2025-11-29 08:36:45.911 227364 DEBUG nova.compute.manager [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:36:45 np0005539551 podman[291337]: 2025-11-29 08:36:45.962899559 +0000 UTC m=+0.055771319 container create d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:36:46 np0005539551 systemd[1]: Started libpod-conmon-d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f.scope.
Nov 29 03:36:46 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:36:46 np0005539551 podman[291337]: 2025-11-29 08:36:45.935059316 +0000 UTC m=+0.027931096 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:46 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46df413971e98aee914592114724f3b2bc63176275dc67a10deef366ddb2253b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:46 np0005539551 podman[291337]: 2025-11-29 08:36:46.049384218 +0000 UTC m=+0.142256028 container init d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:36:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874949819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:46 np0005539551 podman[291337]: 2025-11-29 08:36:46.059842281 +0000 UTC m=+0.152714051 container start d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.064 227364 DEBUG nova.compute.manager [req-ed16b388-5920-41db-af2b-2e2691bd37c8 req-377ed4d8-ae7b-4a67-a9c8-03afa86be4ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.065 227364 DEBUG oslo_concurrency.lockutils [req-ed16b388-5920-41db-af2b-2e2691bd37c8 req-377ed4d8-ae7b-4a67-a9c8-03afa86be4ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.065 227364 DEBUG oslo_concurrency.lockutils [req-ed16b388-5920-41db-af2b-2e2691bd37c8 req-377ed4d8-ae7b-4a67-a9c8-03afa86be4ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.065 227364 DEBUG oslo_concurrency.lockutils [req-ed16b388-5920-41db-af2b-2e2691bd37c8 req-377ed4d8-ae7b-4a67-a9c8-03afa86be4ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.065 227364 DEBUG nova.compute.manager [req-ed16b388-5920-41db-af2b-2e2691bd37c8 req-377ed4d8-ae7b-4a67-a9c8-03afa86be4ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Processing event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.067 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.072 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:46 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [NOTICE]   (291358) : New worker (291360) forked
Nov 29 03:36:46 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [NOTICE]   (291358) : Loading success.
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.089 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.111 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.111 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.329 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405406.3291857, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.329 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.332 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.334 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.337 227364 INFO nova.virt.libvirt.driver [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance spawned successfully.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.337 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.355 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.357 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.376 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.376 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.376 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.377 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.377 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.377 227364 DEBUG nova.virt.libvirt.driver [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.381 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.381 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405406.3314948, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.382 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.547 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.552 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405406.3340447, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.552 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.564 227364 INFO nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.564 227364 DEBUG nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.571 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.574 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.599 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.614 227364 INFO nova.compute.manager [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Took 10.13 seconds to build instance.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.638 227364 DEBUG oslo_concurrency.lockutils [None req-fa974cca-efd1-4923-8906-0352a381ad65 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:46 np0005539551 kernel: tap2c7cb162-70 (unregistering): left promiscuous mode
Nov 29 03:36:46 np0005539551 NetworkManager[48922]: <info>  [1764405406.7938] device (tap2c7cb162-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:46Z|00786|binding|INFO|Releasing lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 from this chassis (sb_readonly=0)
Nov 29 03:36:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:46Z|00787|binding|INFO|Setting lport 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 down in Southbound
Nov 29 03:36:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:46Z|00788|binding|INFO|Removing iface tap2c7cb162-70 ovn-installed in OVS
Nov 29 03:36:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:46.816 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:09:19 10.100.0.11'], port_security=['fa:16:3e:c5:09:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aaaa6a6b-0c21-483c-b891-02ebe64e6aab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '11', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:46.817 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 unbound from our chassis#033[00m
Nov 29 03:36:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:46.821 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a83b79bc-6262-43e7-a9e5-5e808a213726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:46.822 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[846596f9-ec73-48da-b36e-1599efb9c369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:46.824 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace which is not needed anymore#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.828 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:46 np0005539551 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Nov 29 03:36:46 np0005539551 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000aa.scope: Consumed 6.120s CPU time.
Nov 29 03:36:46 np0005539551 systemd-machined[190756]: Machine qemu-81-instance-000000aa terminated.
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:46 np0005539551 NetworkManager[48922]: <info>  [1764405406.9350] manager: (tap2c7cb162-70): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.940 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.950 227364 INFO nova.virt.libvirt.driver [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Instance destroyed successfully.#033[00m
Nov 29 03:36:46 np0005539551 nova_compute[227360]: 2025-11-29 08:36:46.951 227364 DEBUG nova.objects.instance [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'resources' on Instance uuid aaaa6a6b-0c21-483c-b891-02ebe64e6aab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:47 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [NOTICE]   (290857) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:47 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [NOTICE]   (290857) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:47 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [WARNING]  (290857) : Exiting Master process...
Nov 29 03:36:47 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [ALERT]    (290857) : Current worker (290859) exited with code 143 (Terminated)
Nov 29 03:36:47 np0005539551 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[290853]: [WARNING]  (290857) : All workers exited. Exiting... (0)
Nov 29 03:36:47 np0005539551 systemd[1]: libpod-e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d.scope: Deactivated successfully.
Nov 29 03:36:47 np0005539551 podman[291438]: 2025-11-29 08:36:47.034520339 +0000 UTC m=+0.057000103 container died e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:36:47 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:47 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d5054fd5a661199be3517aa70b5aa3211bdeafba9c6923a000d92f5cef55bc16-merged.mount: Deactivated successfully.
Nov 29 03:36:47 np0005539551 podman[291438]: 2025-11-29 08:36:47.073710069 +0000 UTC m=+0.096189823 container cleanup e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:36:47 np0005539551 systemd[1]: libpod-conmon-e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d.scope: Deactivated successfully.
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.115 227364 DEBUG nova.virt.libvirt.vif [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:34:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1484770108',display_name='tempest-ServersNegativeTestJSON-server-1484770108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1484770108',id=170,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-17iqb8m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:30Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=aaaa6a6b-0c21-483c-b891-02ebe64e6aab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.115 227364 DEBUG nova.network.os_vif_util [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "address": "fa:16:3e:c5:09:19", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c7cb162-70", "ovs_interfaceid": "2c7cb162-70f0-496f-a2bf-0b0af61bd4b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.116 227364 DEBUG nova.network.os_vif_util [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.116 227364 DEBUG os_vif [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.118 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.118 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c7cb162-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.132 227364 INFO os_vif [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:09:19,bridge_name='br-int',has_traffic_filtering=True,id=2c7cb162-70f0-496f-a2bf-0b0af61bd4b2,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c7cb162-70')#033[00m
Nov 29 03:36:47 np0005539551 podman[291471]: 2025-11-29 08:36:47.141971775 +0000 UTC m=+0.049303134 container remove e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.147 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[790ed568-a0d0-46a1-8716-29c400d39ec1]: (4, ('Sat Nov 29 08:36:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d)\ne98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d\nSat Nov 29 08:36:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (e98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d)\ne98c5d4cf86561b83ad79ab2715050756865ed9562436690e128caa8b7c67e3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.150 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e796011b-41d5-43ed-8180-0c605c491ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.151 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:47 np0005539551 kernel: tapa83b79bc-60: left promiscuous mode
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.169 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.171 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b8147bb4-ecb4-4550-8029-53efc2f90b00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.187 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aeea0191-d298-417c-be0d-29dd1fadbc41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.188 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5b2197-1679-456e-84ae-df07e1d2c729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.203 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfe328d-e85d-47c9-8457-dd54a5733a3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843622, 'reachable_time': 24039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291502, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.205 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:36:47.205 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d9e2c5-8280-4cea-8f0c-d59edbfbb46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:47 np0005539551 systemd[1]: run-netns-ovnmeta\x2da83b79bc\x2d6262\x2d43e7\x2da9e5\x2d5e808a213726.mount: Deactivated successfully.
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.306 227364 DEBUG nova.compute.manager [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.306 227364 DEBUG oslo_concurrency.lockutils [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.306 227364 DEBUG oslo_concurrency.lockutils [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.306 227364 DEBUG oslo_concurrency.lockutils [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.306 227364 DEBUG nova.compute.manager [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:47 np0005539551 nova_compute[227360]: 2025-11-29 08:36:47.307 227364 DEBUG nova.compute.manager [req-83ec1603-78fe-4994-ad95-a85470b86f18 req-f1807df2-e7c6-4a89-8169-26cae9f54fdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-unplugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:36:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:47.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.111 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.112 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.112 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.140 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.161 227364 DEBUG nova.compute.manager [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.161 227364 DEBUG oslo_concurrency.lockutils [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.162 227364 DEBUG oslo_concurrency.lockutils [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.162 227364 DEBUG oslo_concurrency.lockutils [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.163 227364 DEBUG nova.compute.manager [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.163 227364 WARNING nova.compute.manager [req-09d5cb76-b3fb-4860-b4a1-6565f1f27a62 req-80934763-dfa0-4122-a10d-154542ba494d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.547 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.547 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.548 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.548 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.810 227364 INFO nova.virt.libvirt.driver [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deleting instance files /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_del#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.811 227364 INFO nova.virt.libvirt.driver [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deletion of /var/lib/nova/instances/aaaa6a6b-0c21-483c-b891-02ebe64e6aab_del complete#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.856 227364 INFO nova.compute.manager [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Took 2.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.856 227364 DEBUG oslo.service.loopingcall [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.857 227364 DEBUG nova.compute.manager [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:36:48 np0005539551 nova_compute[227360]: 2025-11-29 08:36:48.857 227364 DEBUG nova.network.neutron [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:36:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:49.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.792 227364 DEBUG nova.compute.manager [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.792 227364 DEBUG oslo_concurrency.lockutils [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.793 227364 DEBUG oslo_concurrency.lockutils [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.793 227364 DEBUG oslo_concurrency.lockutils [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.793 227364 DEBUG nova.compute.manager [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] No waiting events found dispatching network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:49 np0005539551 nova_compute[227360]: 2025-11-29 08:36:49.793 227364 WARNING nova.compute.manager [req-1b3b103e-6767-4534-9137-a7273c52c33b req-c0f184a0-16e6-4f53-984c-8860334ea5cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received unexpected event network-vif-plugged-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.139 227364 DEBUG nova.network.neutron [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.167 227364 INFO nova.compute.manager [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Took 1.31 seconds to deallocate network for instance.#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.217 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.217 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.272 227364 DEBUG nova.compute.manager [req-32c7d54c-80d5-420b-a0f7-565b758efeaa req-217b3b53-b20a-4ff2-bda7-169afa1a4b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Received event network-vif-deleted-2c7cb162-70f0-496f-a2bf-0b0af61bd4b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.274 227364 DEBUG oslo_concurrency.processutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.305 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.333 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.333 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.334 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675963448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.693 227364 DEBUG oslo_concurrency.processutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.699 227364 DEBUG nova.compute.provider_tree [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.716 227364 DEBUG nova.scheduler.client.report [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.746 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.773 227364 INFO nova.scheduler.client.report [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Deleted allocations for instance aaaa6a6b-0c21-483c-b891-02ebe64e6aab#033[00m
Nov 29 03:36:50 np0005539551 nova_compute[227360]: 2025-11-29 08:36:50.863 227364 DEBUG oslo_concurrency.lockutils [None req-159baa32-fed3-49c2-9302-1d876367a372 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "aaaa6a6b-0c21-483c-b891-02ebe64e6aab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:51 np0005539551 podman[291527]: 2025-11-29 08:36:51.632904986 +0000 UTC m=+0.083439308 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 03:36:51 np0005539551 podman[291526]: 2025-11-29 08:36:51.659237699 +0000 UTC m=+0.106759639 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 03:36:51 np0005539551 podman[291528]: 2025-11-29 08:36:51.664234924 +0000 UTC m=+0.104010885 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:36:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:51.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.121 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.314 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.314 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.335 227364 DEBUG nova.objects.instance [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.404 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.614 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.615 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.615 227364 INFO nova.compute.manager [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Attaching volume d0fba77f-63fd-4100-a2fb-b81bfa4b659a to /dev/vdb#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.775 227364 DEBUG os_brick.utils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.776 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.787 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.787 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe1d316-fe13-4553-8aeb-9d322008a107]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.788 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.802 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.803 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c835a737-4a78-427c-a947-857714625f13]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.804 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.814 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.815 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[c143712e-fc64-47a9-8401-16340a91a8c6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.816 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b563f5-1e66-4cd8-babd-cfda533a88ac]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.817 227364 DEBUG oslo_concurrency.processutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.859 227364 DEBUG oslo_concurrency.processutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.863 227364 DEBUG os_brick.initiator.connectors.lightos [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.863 227364 DEBUG os_brick.initiator.connectors.lightos [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.864 227364 DEBUG os_brick.initiator.connectors.lightos [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.865 227364 DEBUG os_brick.utils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:36:52 np0005539551 nova_compute[227360]: 2025-11-29 08:36:52.865 227364 DEBUG nova.virt.block_device [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating existing volume attachment record: 9a45bbc0-d369-4b28-b260-27c4a5e2d9ea _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:36:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:36:53Z|00789|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:36:53 np0005539551 nova_compute[227360]: 2025-11-29 08:36:53.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:53 np0005539551 nova_compute[227360]: 2025-11-29 08:36:53.681 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:53.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.007 227364 DEBUG nova.objects.instance [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.076 227364 DEBUG nova.virt.libvirt.driver [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Attempting to attach volume d0fba77f-63fd-4100-a2fb-b81bfa4b659a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.080 227364 DEBUG nova.virt.libvirt.guest [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-d0fba77f-63fd-4100-a2fb-b81bfa4b659a">
Nov 29 03:36:54 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:36:54 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:36:54 np0005539551 nova_compute[227360]:  <serial>d0fba77f-63fd-4100-a2fb-b81bfa4b659a</serial>
Nov 29 03:36:54 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:36:54 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.627 227364 DEBUG nova.virt.libvirt.driver [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.628 227364 DEBUG nova.virt.libvirt.driver [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.628 227364 DEBUG nova.virt.libvirt.driver [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:54 np0005539551 nova_compute[227360]: 2025-11-29 08:36:54.628 227364 DEBUG nova.virt.libvirt.driver [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No VIF found with MAC fa:16:3e:9b:8f:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:55 np0005539551 nova_compute[227360]: 2025-11-29 08:36:55.038 227364 DEBUG oslo_concurrency.lockutils [None req-5e9e7e6d-051e-49a9-8a6b-cae0497c833f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:55.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:57 np0005539551 nova_compute[227360]: 2025-11-29 08:36:57.124 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 03:36:58 np0005539551 nova_compute[227360]: 2025-11-29 08:36:58.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:58 np0005539551 nova_compute[227360]: 2025-11-29 08:36:58.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:36:58 np0005539551 nova_compute[227360]: 2025-11-29 08:36:58.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:59.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:36:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:00 np0005539551 nova_compute[227360]: 2025-11-29 08:37:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:01 np0005539551 nova_compute[227360]: 2025-11-29 08:37:01.949 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405406.9485083, aaaa6a6b-0c21-483c-b891-02ebe64e6aab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:01 np0005539551 nova_compute[227360]: 2025-11-29 08:37:01.950 227364 INFO nova.compute.manager [-] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:37:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539551 nova_compute[227360]: 2025-11-29 08:37:02.126 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:02Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:8f:68 10.100.0.11
Nov 29 03:37:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:02Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:8f:68 10.100.0.11
Nov 29 03:37:02 np0005539551 nova_compute[227360]: 2025-11-29 08:37:02.951 227364 DEBUG nova.compute.manager [None req-6fda6d44-a1d9-425f-8ad5-0057e7888557 - - - - - -] [instance: aaaa6a6b-0c21-483c-b891-02ebe64e6aab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:03 np0005539551 nova_compute[227360]: 2025-11-29 08:37:03.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:05 np0005539551 nova_compute[227360]: 2025-11-29 08:37:05.156 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:05 np0005539551 NetworkManager[48922]: <info>  [1764405425.1575] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 03:37:05 np0005539551 NetworkManager[48922]: <info>  [1764405425.1585] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 03:37:05 np0005539551 nova_compute[227360]: 2025-11-29 08:37:05.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:05 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:05Z|00790|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:37:05 np0005539551 nova_compute[227360]: 2025-11-29 08:37:05.369 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539551 nova_compute[227360]: 2025-11-29 08:37:06.545 227364 DEBUG nova.compute.manager [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:06 np0005539551 nova_compute[227360]: 2025-11-29 08:37:06.545 227364 DEBUG nova.compute.manager [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:06 np0005539551 nova_compute[227360]: 2025-11-29 08:37:06.545 227364 DEBUG oslo_concurrency.lockutils [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:06 np0005539551 nova_compute[227360]: 2025-11-29 08:37:06.546 227364 DEBUG oslo_concurrency.lockutils [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:06 np0005539551 nova_compute[227360]: 2025-11-29 08:37:06.546 227364 DEBUG nova.network.neutron [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:07 np0005539551 nova_compute[227360]: 2025-11-29 08:37:07.180 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.255 227364 DEBUG nova.compute.manager [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.256 227364 DEBUG nova.compute.manager [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.257 227364 DEBUG oslo_concurrency.lockutils [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.508 227364 DEBUG nova.network.neutron [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.509 227364 DEBUG nova.network.neutron [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.530 227364 DEBUG oslo_concurrency.lockutils [req-114ccea9-6bc2-4f40-8b17-b7ac8bb33c97 req-d9878578-cbdf-4c78-9b49-ef8d886c9b52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.531 227364 DEBUG oslo_concurrency.lockutils [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.531 227364 DEBUG nova.network.neutron [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:08 np0005539551 nova_compute[227360]: 2025-11-29 08:37:08.725 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:09 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:37:09 np0005539551 nova_compute[227360]: 2025-11-29 08:37:09.406 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.160 227364 DEBUG nova.network.neutron [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.160 227364 DEBUG nova.network.neutron [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.178 227364 DEBUG oslo_concurrency.lockutils [req-d4c7776b-b54b-401b-8706-145ae6596634 req-480b1ce0-f279-4b1f-b912-826178346f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.361 227364 DEBUG nova.compute.manager [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.361 227364 DEBUG nova.compute.manager [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.362 227364 DEBUG oslo_concurrency.lockutils [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.362 227364 DEBUG oslo_concurrency.lockutils [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:10 np0005539551 nova_compute[227360]: 2025-11-29 08:37:10.362 227364 DEBUG nova.network.neutron [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:37:11 np0005539551 nova_compute[227360]: 2025-11-29 08:37:11.490 227364 DEBUG nova.network.neutron [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:11 np0005539551 nova_compute[227360]: 2025-11-29 08:37:11.491 227364 DEBUG nova.network.neutron [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:11 np0005539551 nova_compute[227360]: 2025-11-29 08:37:11.511 227364 DEBUG oslo_concurrency.lockutils [req-c4495c7f-435b-4683-b555-eb068e96ef0c req-e8324553-87ea-434a-8b0f-22fbb4d35509 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:11.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539551 nova_compute[227360]: 2025-11-29 08:37:12.183 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:12 np0005539551 nova_compute[227360]: 2025-11-29 08:37:12.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:13 np0005539551 nova_compute[227360]: 2025-11-29 08:37:13.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:15 np0005539551 nova_compute[227360]: 2025-11-29 08:37:15.506 227364 DEBUG nova.compute.manager [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:15 np0005539551 nova_compute[227360]: 2025-11-29 08:37:15.506 227364 DEBUG nova.compute.manager [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:15 np0005539551 nova_compute[227360]: 2025-11-29 08:37:15.507 227364 DEBUG oslo_concurrency.lockutils [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:15 np0005539551 nova_compute[227360]: 2025-11-29 08:37:15.507 227364 DEBUG oslo_concurrency.lockutils [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:15 np0005539551 nova_compute[227360]: 2025-11-29 08:37:15.508 227364 DEBUG nova.network.neutron [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:15.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.141 227364 DEBUG nova.network.neutron [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.142 227364 DEBUG nova.network.neutron [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.169 227364 DEBUG oslo_concurrency.lockutils [req-f2bb7f06-be84-454d-ae34-08e69ad08cc3 req-620bae1e-0294-4db8-955a-dbf84492db42 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.186 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.615 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.616 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.616 227364 INFO nova.compute.manager [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Rebooting instance#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.635 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.636 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:17 np0005539551 nova_compute[227360]: 2025-11-29 08:37:17.636 227364 DEBUG nova.network.neutron [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:37:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:17.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:18 np0005539551 nova_compute[227360]: 2025-11-29 08:37:18.730 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539551 nova_compute[227360]: 2025-11-29 08:37:18.795 227364 DEBUG nova.network.neutron [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:18 np0005539551 nova_compute[227360]: 2025-11-29 08:37:18.839 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:18 np0005539551 nova_compute[227360]: 2025-11-29 08:37:18.840 227364 DEBUG nova.compute.manager [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:19 np0005539551 kernel: tap06d48b3b-ec (unregistering): left promiscuous mode
Nov 29 03:37:19 np0005539551 NetworkManager[48922]: <info>  [1764405439.0106] device (tap06d48b3b-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:37:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:19Z|00791|binding|INFO|Releasing lport 06d48b3b-ec07-4803-9204-b300217b41d1 from this chassis (sb_readonly=0)
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.018 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:19Z|00792|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 down in Southbound
Nov 29 03:37:19 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:19Z|00793|binding|INFO|Removing iface tap06d48b3b-ec ovn-installed in OVS
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.024 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:8f:68 10.100.0.11'], port_security=['fa:16:3e:9b:8f:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d af0b44f0-ff49-4977-bc4f-d6cd9c3d296d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=06d48b3b-ec07-4803-9204-b300217b41d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.025 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 06d48b3b-ec07-4803-9204-b300217b41d1 in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 unbound from our chassis#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.027 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0430aba5-d0d7-4d98-ad87-552e6639c190, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.028 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6c8d38-7c18-419f-aa12-77b2956bab12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.028 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace which is not needed anymore#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.039 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Nov 29 03:37:19 np0005539551 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b2.scope: Consumed 15.045s CPU time.
Nov 29 03:37:19 np0005539551 systemd-machined[190756]: Machine qemu-82-instance-000000b2 terminated.
Nov 29 03:37:19 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [NOTICE]   (291358) : haproxy version is 2.8.14-c23fe91
Nov 29 03:37:19 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [NOTICE]   (291358) : path to executable is /usr/sbin/haproxy
Nov 29 03:37:19 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [WARNING]  (291358) : Exiting Master process...
Nov 29 03:37:19 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [ALERT]    (291358) : Current worker (291360) exited with code 143 (Terminated)
Nov 29 03:37:19 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[291352]: [WARNING]  (291358) : All workers exited. Exiting... (0)
Nov 29 03:37:19 np0005539551 systemd[1]: libpod-d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f.scope: Deactivated successfully.
Nov 29 03:37:19 np0005539551 podman[291826]: 2025-11-29 08:37:19.178509069 +0000 UTC m=+0.050262621 container died d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.190 227364 INFO nova.virt.libvirt.driver [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance destroyed successfully.#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.191 227364 DEBUG nova.objects.instance [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'resources' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.210 227364 DEBUG nova.virt.libvirt.vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:18Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.210 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.212 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.213 227364 DEBUG os_vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:37:19 np0005539551 systemd[1]: var-lib-containers-storage-overlay-46df413971e98aee914592114724f3b2bc63176275dc67a10deef366ddb2253b-merged.mount: Deactivated successfully.
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.215 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.216 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06d48b3b-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.218 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.221 227364 INFO os_vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec')#033[00m
Nov 29 03:37:19 np0005539551 podman[291826]: 2025-11-29 08:37:19.226892608 +0000 UTC m=+0.098646150 container cleanup d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.230 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start _get_guest_xml network_info=[{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8a30eb6e-dddc-4108-8576-dcd7c8d5406f,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '8a30eb6e-dddc-4108-8576-dcd7c8d5406f'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d0fba77f-63fd-4100-a2fb-b81bfa4b659a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd0fba77f-63fd-4100-a2fb-b81bfa4b659a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1', 'attached_at': '', 'detached_at': '', 'volume_id': 'd0fba77f-63fd-4100-a2fb-b81bfa4b659a', 'serial': 'd0fba77f-63fd-4100-a2fb-b81bfa4b659a'}, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': '9a45bbc0-d369-4b28-b260-27c4a5e2d9ea', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:37:19 np0005539551 systemd[1]: libpod-conmon-d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f.scope: Deactivated successfully.
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.235 227364 WARNING nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.241 227364 DEBUG nova.virt.libvirt.host [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.242 227364 DEBUG nova.virt.libvirt.host [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.247 227364 DEBUG nova.virt.libvirt.host [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.247 227364 DEBUG nova.virt.libvirt.host [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.249 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.249 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8a30eb6e-dddc-4108-8576-dcd7c8d5406f,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.249 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.250 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.250 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.250 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.251 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.251 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.251 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.252 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.252 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.252 227364 DEBUG nova.virt.hardware [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.252 227364 DEBUG nova.objects.instance [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.275 227364 DEBUG oslo_concurrency.processutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:19 np0005539551 podman[291866]: 2025-11-29 08:37:19.287371444 +0000 UTC m=+0.039185461 container remove d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.292 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[acfa8224-1e1d-45db-ae0c-ed428928cf4a]: (4, ('Sat Nov 29 08:37:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f)\nd6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f\nSat Nov 29 08:37:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (d6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f)\nd6ae02f2890ef865c1225ffe8b74e908522b4ecc7b19424126b9b5cb236e6d3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.294 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4133bfb6-3038-4c2b-91a8-c7a8b8b294fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.295 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:19 np0005539551 kernel: tap0430aba5-d0: left promiscuous mode
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.309 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.313 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[176deb23-4309-47fc-a82b-337d1030cf7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.331 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbdf80f-7da2-4691-b1fa-122fa1441782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.333 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a5a9e6-f062-4c92-a462-b875afadaa9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.350 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dac09839-7b94-40e6-bbad-d489eaf43a97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845218, 'reachable_time': 30378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291882, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 systemd[1]: run-netns-ovnmeta\x2d0430aba5\x2dd0d7\x2d4d98\x2dad87\x2d552e6639c190.mount: Deactivated successfully.
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.353 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.353 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[9c09df37-23a7-4b6c-961c-2f87fdd912cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.385 227364 DEBUG nova.compute.manager [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.386 227364 DEBUG oslo_concurrency.lockutils [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.387 227364 DEBUG oslo_concurrency.lockutils [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.387 227364 DEBUG oslo_concurrency.lockutils [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.388 227364 DEBUG nova.compute.manager [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.388 227364 WARNING nova.compute.manager [req-9330d753-6cac-4b3d-a8b3-0ab853dadfae req-8d806601-d0b3-4e72-a88a-d053b315053a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:37:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3018626926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.710 227364 DEBUG oslo_concurrency.processutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:19 np0005539551 nova_compute[227360]: 2025-11-29 08:37:19.750 227364 DEBUG oslo_concurrency.processutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.886 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.886 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:19.886 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:19.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1488045104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.195 227364 DEBUG oslo_concurrency.processutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.347 227364 DEBUG nova.virt.libvirt.vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:18Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.348 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.349 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.351 227364 DEBUG nova.objects.instance [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.385 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <uuid>1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</uuid>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <name>instance-000000b2</name>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestMinimumBasicScenario-server-111008662</nova:name>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:37:19</nova:creationTime>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:user uuid="e8b20745b2d14f70b64a43335faed2f4">tempest-TestMinimumBasicScenario-1569311049-project-member</nova:user>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:project uuid="8d5e30b74e6449dd90ecb667977d1fe9">tempest-TestMinimumBasicScenario-1569311049</nova:project>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="8a30eb6e-dddc-4108-8576-dcd7c8d5406f"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <nova:port uuid="06d48b3b-ec07-4803-9204-b300217b41d1">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="serial">1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="uuid">1f01c882-bb44-4ba5-b82f-e3ffa31b8df1</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_disk.config">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-d0fba77f-63fd-4100-a2fb-b81bfa4b659a">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <serial>d0fba77f-63fd-4100-a2fb-b81bfa4b659a</serial>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:9b:8f:68"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <target dev="tap06d48b3b-ec"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1/console.log" append="off"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <input type="keyboard" bus="usb"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:37:20 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:37:20 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:37:20 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:37:20 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.388 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.388 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.388 227364 DEBUG nova.virt.libvirt.driver [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.389 227364 DEBUG nova.virt.libvirt.vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:18Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.389 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.390 227364 DEBUG nova.network.os_vif_util [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.391 227364 DEBUG os_vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.393 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.393 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.394 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.398 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06d48b3b-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.398 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06d48b3b-ec, col_values=(('external_ids', {'iface-id': '06d48b3b-ec07-4803-9204-b300217b41d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:8f:68', 'vm-uuid': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.4009] manager: (tap06d48b3b-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.405 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.406 227364 INFO os_vif [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec')#033[00m
Nov 29 03:37:20 np0005539551 kernel: tap06d48b3b-ec: entered promiscuous mode
Nov 29 03:37:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:20Z|00794|binding|INFO|Claiming lport 06d48b3b-ec07-4803-9204-b300217b41d1 for this chassis.
Nov 29 03:37:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:20Z|00795|binding|INFO|06d48b3b-ec07-4803-9204-b300217b41d1: Claiming fa:16:3e:9b:8f:68 10.100.0.11
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.4787] manager: (tap06d48b3b-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.478 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 systemd-udevd[291807]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.485 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:8f:68 10.100.0.11'], port_security=['fa:16:3e:9b:8f:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d af0b44f0-ff49-4977-bc4f-d6cd9c3d296d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=06d48b3b-ec07-4803-9204-b300217b41d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.486 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 06d48b3b-ec07-4803-9204-b300217b41d1 in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 bound to our chassis#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.488 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0430aba5-d0d7-4d98-ad87-552e6639c190#033[00m
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.4937] device (tap06d48b3b-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.4947] device (tap06d48b3b-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:37:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:20Z|00796|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 ovn-installed in OVS
Nov 29 03:37:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:20Z|00797|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 up in Southbound
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.500 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.501 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[33259c8f-5cb7-478f-b2f5-be60a76271c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.502 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0430aba5-d1 in ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.504 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0430aba5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.505 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[60ccdc79-e8cf-44da-b081-79188322207a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.505 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9d51719e-70b1-43da-b95a-646f87def6c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 systemd-machined[190756]: New machine qemu-83-instance-000000b2.
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.515 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[8e840ec4-efeb-4432-9c3c-08cc023bf7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.527 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cb676327-3ef8-4061-95cd-1c68e4a5082c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 systemd[1]: Started Virtual Machine qemu-83-instance-000000b2.
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.559 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d3891-cd1e-4883-b072-75e24be07a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.5667] manager: (tap0430aba5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.565 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b11fc5-f2bd-4c6b-a5de-ed5cb8b3e001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.597 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[515bf4ca-d778-455b-847d-8f66c3e4f279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.600 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7a46b0e5-9834-4847-9a86-28235d3b30b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.6224] device (tap0430aba5-d0): carrier: link connected
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.628 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9e905f87-cf6d-4cc9-8f56-160c507d477c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.750 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2149d232-325a-40eb-bf07-32654d8d441c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848750, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291989, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.768 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[91c859c8-ea95-4a2c-a0b5-24961e09dec0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:fd4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 848750, 'tstamp': 848750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291990, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.788 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ad81ae32-2117-46bf-a7c7-29ded08d4edf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848750, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291991, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.825 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8d207005-3df9-48be-9cd7-88e4726fd04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[43eaf1b3-a998-47a6-82e3-9fbbcff28c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.885 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0430aba5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.887 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 NetworkManager[48922]: <info>  [1764405440.8878] manager: (tap0430aba5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 03:37:20 np0005539551 kernel: tap0430aba5-d0: entered promiscuous mode
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.892 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0430aba5-d0, col_values=(('external_ids', {'iface-id': 'dac731d0-69cc-4042-8450-f886e5854f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:20Z|00798|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.912 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 nova_compute[227360]: 2025-11-29 08:37:20.913 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.914 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.914 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc89d8e-b12c-436f-8083-f575c7b12c9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.915 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:37:20 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:20.916 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'env', 'PROCESS_TAG=haproxy-0430aba5-d0d7-4d98-ad87-552e6639c190', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0430aba5-d0d7-4d98-ad87-552e6639c190.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:37:21 np0005539551 podman[292077]: 2025-11-29 08:37:21.295741845 +0000 UTC m=+0.052547713 container create f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:37:21 np0005539551 systemd[1]: Started libpod-conmon-f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418.scope.
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.354 227364 DEBUG nova.virt.libvirt.host [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Removed pending event for 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.355 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405441.3536255, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.356 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:37:21 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.358 227364 DEBUG nova.compute.manager [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:37:21 np0005539551 podman[292077]: 2025-11-29 08:37:21.269248239 +0000 UTC m=+0.026054077 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.363 227364 INFO nova.virt.libvirt.driver [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance rebooted successfully.#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.364 227364 DEBUG nova.compute.manager [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:21 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76bc33b94933a08dad2c71ff30b567ac3e1367f5791b1903fd741d8e96a15193/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.377 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:21 np0005539551 podman[292077]: 2025-11-29 08:37:21.378105953 +0000 UTC m=+0.134911821 container init f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.381 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:21 np0005539551 podman[292077]: 2025-11-29 08:37:21.383131149 +0000 UTC m=+0.139936977 container start f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:37:21 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [NOTICE]   (292103) : New worker (292105) forked
Nov 29 03:37:21 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [NOTICE]   (292103) : Loading success.
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.409 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.410 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405441.3544557, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.410 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.416 227364 DEBUG oslo_concurrency.lockutils [None req-2c71b089-ee0b-4978-a70f-0927fdcad636 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.435 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.440 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.507 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.508 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.508 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.509 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.509 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.509 227364 WARNING nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.509 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.510 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.510 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.510 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.510 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.511 227364 WARNING nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.511 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.511 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.512 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.512 227364 DEBUG oslo_concurrency.lockutils [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.512 227364 DEBUG nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:21 np0005539551 nova_compute[227360]: 2025-11-29 08:37:21.513 227364 WARNING nova.compute.manager [req-a12c671a-164e-4d40-873a-3ee06407d23f req-50b56022-21b6-4bd0-aec2-b6a365e0b183 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:37:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:21.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539551 podman[292116]: 2025-11-29 08:37:22.594350596 +0000 UTC m=+0.047983679 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:37:22 np0005539551 podman[292115]: 2025-11-29 08:37:22.602235689 +0000 UTC m=+0.056764307 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:37:22 np0005539551 podman[292114]: 2025-11-29 08:37:22.632790845 +0000 UTC m=+0.086398677 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:37:23 np0005539551 nova_compute[227360]: 2025-11-29 08:37:23.604 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:23.606 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:23.607 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:37:23 np0005539551 nova_compute[227360]: 2025-11-29 08:37:23.732 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:23.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:23.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:25 np0005539551 nova_compute[227360]: 2025-11-29 08:37:25.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 03:37:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:26.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:26 np0005539551 nova_compute[227360]: 2025-11-29 08:37:26.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:27.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:28 np0005539551 nova_compute[227360]: 2025-11-29 08:37:28.737 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:29.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:30 np0005539551 nova_compute[227360]: 2025-11-29 08:37:30.405 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:31.610 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:31.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:32.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:33 np0005539551 nova_compute[227360]: 2025-11-29 08:37:33.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:34.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:35 np0005539551 nova_compute[227360]: 2025-11-29 08:37:35.408 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:35Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:8f:68 10.100.0.11
Nov 29 03:37:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:36.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539551 nova_compute[227360]: 2025-11-29 08:37:38.782 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:40.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:40 np0005539551 nova_compute[227360]: 2025-11-29 08:37:40.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:41.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:42.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:42 np0005539551 nova_compute[227360]: 2025-11-29 08:37:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:43 np0005539551 nova_compute[227360]: 2025-11-29 08:37:43.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:44.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.412 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.486 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.486 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.486 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.486 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.487 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1291288639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:45 np0005539551 nova_compute[227360]: 2025-11-29 08:37:45.977 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.063 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.064 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.064 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.245 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.247 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4070MB free_disk=20.831459045410156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.247 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.247 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.350 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.350 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.350 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.400 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184404439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.850 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.858 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.885 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.936 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:37:46 np0005539551 nova_compute[227360]: 2025-11-29 08:37:46.937 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:47.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:47 np0005539551 nova_compute[227360]: 2025-11-29 08:37:47.939 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539551 nova_compute[227360]: 2025-11-29 08:37:48.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:48 np0005539551 nova_compute[227360]: 2025-11-29 08:37:48.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:37:48 np0005539551 nova_compute[227360]: 2025-11-29 08:37:48.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:37:48 np0005539551 nova_compute[227360]: 2025-11-29 08:37:48.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:49 np0005539551 nova_compute[227360]: 2025-11-29 08:37:49.086 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:49 np0005539551 nova_compute[227360]: 2025-11-29 08:37:49.086 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:49 np0005539551 nova_compute[227360]: 2025-11-29 08:37:49.086 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:37:49 np0005539551 nova_compute[227360]: 2025-11-29 08:37:49.087 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:50 np0005539551 nova_compute[227360]: 2025-11-29 08:37:50.398 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:50 np0005539551 nova_compute[227360]: 2025-11-29 08:37:50.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:50 np0005539551 nova_compute[227360]: 2025-11-29 08:37:50.413 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:37:50 np0005539551 nova_compute[227360]: 2025-11-29 08:37:50.453 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.400 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.400 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.408 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.419 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.519 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.520 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.526 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.527 227364 INFO nova.compute.claims [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:37:51 np0005539551 nova_compute[227360]: 2025-11-29 08:37:51.643 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:51.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4174580569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.078 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.085 227364 DEBUG nova.compute.provider_tree [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.108 227364 DEBUG nova.scheduler.client.report [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.387 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.388 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.520 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.521 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.544 227364 INFO nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.581 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.694 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.695 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.696 227364 INFO nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Creating image(s)#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.719 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.756 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.784 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.788 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.827 227364 DEBUG nova.policy [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.882 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.883 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.883 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.883 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.902 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:52 np0005539551 nova_compute[227360]: 2025-11-29 08:37:52.906 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.152 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.211 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.305 227364 DEBUG nova.objects.instance [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 8fc82e8c-6159-4a45-995b-c8e306dade2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.323 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.324 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Ensure instance console log exists: /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.324 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.324 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.325 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.607 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Successfully created port: 402578bf-9e47-411e-a9a0-bd831f9b380e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:37:53 np0005539551 podman[292412]: 2025-11-29 08:37:53.621414588 +0000 UTC m=+0.058721430 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:37:53 np0005539551 podman[292411]: 2025-11-29 08:37:53.637925615 +0000 UTC m=+0.070283853 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:37:53 np0005539551 podman[292410]: 2025-11-29 08:37:53.649897648 +0000 UTC m=+0.086491140 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:37:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:53.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:53 np0005539551 nova_compute[227360]: 2025-11-29 08:37:53.824 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.555 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Successfully updated port: 402578bf-9e47-411e-a9a0-bd831f9b380e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.580 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.580 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.581 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.698 227364 DEBUG nova.compute.manager [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-changed-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.699 227364 DEBUG nova.compute.manager [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Refreshing instance network info cache due to event network-changed-402578bf-9e47-411e-a9a0-bd831f9b380e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.699 227364 DEBUG oslo_concurrency.lockutils [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:54 np0005539551 nova_compute[227360]: 2025-11-29 08:37:54.787 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.404 227364 DEBUG nova.compute.manager [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.405 227364 DEBUG nova.compute.manager [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.405 227364 DEBUG oslo_concurrency.lockutils [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.406 227364 DEBUG oslo_concurrency.lockutils [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.406 227364 DEBUG nova.network.neutron [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.457 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.656 227364 DEBUG nova.network.neutron [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updating instance_info_cache with network_info: [{"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.676 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.677 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Instance network_info: |[{"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.678 227364 DEBUG oslo_concurrency.lockutils [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.678 227364 DEBUG nova.network.neutron [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Refreshing network info cache for port 402578bf-9e47-411e-a9a0-bd831f9b380e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.684 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Start _get_guest_xml network_info=[{"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.690 227364 WARNING nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.703 227364 DEBUG nova.virt.libvirt.host [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.705 227364 DEBUG nova.virt.libvirt.host [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.708 227364 DEBUG nova.virt.libvirt.host [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.709 227364 DEBUG nova.virt.libvirt.host [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.710 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.710 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.711 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.711 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.711 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.711 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.711 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.712 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.712 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.712 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.712 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.712 227364 DEBUG nova.virt.hardware [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:37:55 np0005539551 nova_compute[227360]: 2025-11-29 08:37:55.715 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1389541247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.162 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.202 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.207 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1011265833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.641 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.643 227364 DEBUG nova.virt.libvirt.vif [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-150909303',display_name='tempest-TestNetworkBasicOps-server-150909303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-150909303',id=182,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4R36grIphtmcogfN7f7XUKzA3WvhJ7uldgnXhl3WPmJdcvwh2b8mCMqxuy1g7iUfdW30tra7vF5HZPT48iRw9BOGYiesVGDOm3G7pRZI5xev3739PbDhH4Jg23RHG3qA==',key_name='tempest-TestNetworkBasicOps-1973126684',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-fd0m3fdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:52Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=8fc82e8c-6159-4a45-995b-c8e306dade2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.644 227364 DEBUG nova.network.os_vif_util [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.645 227364 DEBUG nova.network.os_vif_util [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.646 227364 DEBUG nova.objects.instance [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8fc82e8c-6159-4a45-995b-c8e306dade2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.689 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <uuid>8fc82e8c-6159-4a45-995b-c8e306dade2a</uuid>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <name>instance-000000b6</name>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkBasicOps-server-150909303</nova:name>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:37:55</nova:creationTime>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <nova:port uuid="402578bf-9e47-411e-a9a0-bd831f9b380e">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="serial">8fc82e8c-6159-4a45-995b-c8e306dade2a</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="uuid">8fc82e8c-6159-4a45-995b-c8e306dade2a</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/8fc82e8c-6159-4a45-995b-c8e306dade2a_disk">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:a7:a3:18"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <target dev="tap402578bf-9e"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/console.log" append="off"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:37:56 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:37:56 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:37:56 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:37:56 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.691 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Preparing to wait for external event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.691 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.692 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.692 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.693 227364 DEBUG nova.virt.libvirt.vif [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-150909303',display_name='tempest-TestNetworkBasicOps-server-150909303',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-150909303',id=182,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4R36grIphtmcogfN7f7XUKzA3WvhJ7uldgnXhl3WPmJdcvwh2b8mCMqxuy1g7iUfdW30tra7vF5HZPT48iRw9BOGYiesVGDOm3G7pRZI5xev3739PbDhH4Jg23RHG3qA==',key_name='tempest-TestNetworkBasicOps-1973126684',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-fd0m3fdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:52Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=8fc82e8c-6159-4a45-995b-c8e306dade2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.693 227364 DEBUG nova.network.os_vif_util [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.694 227364 DEBUG nova.network.os_vif_util [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.695 227364 DEBUG os_vif [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.696 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.697 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.700 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.700 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap402578bf-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.700 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap402578bf-9e, col_values=(('external_ids', {'iface-id': '402578bf-9e47-411e-a9a0-bd831f9b380e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:a3:18', 'vm-uuid': '8fc82e8c-6159-4a45-995b-c8e306dade2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.702 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:56 np0005539551 NetworkManager[48922]: <info>  [1764405476.7031] manager: (tap402578bf-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.704 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.708 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.709 227364 INFO os_vif [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e')#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.780 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.780 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.781 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:a7:a3:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.781 227364 INFO nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Using config drive#033[00m
Nov 29 03:37:56 np0005539551 nova_compute[227360]: 2025-11-29 08:37:56.809 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.112 227364 INFO nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Creating config drive at /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.117 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltb3qhme execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.149 227364 DEBUG nova.network.neutron [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.151 227364 DEBUG nova.network.neutron [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.175 227364 DEBUG oslo_concurrency.lockutils [req-d8bda6ca-4781-486b-b456-660c7b7c351d req-380b3588-55b8-492b-8489-5becc8817168 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.254 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltb3qhme" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.280 227364 DEBUG nova.storage.rbd_utils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.284 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.461 227364 DEBUG oslo_concurrency.processutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config 8fc82e8c-6159-4a45-995b-c8e306dade2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.463 227364 INFO nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Deleting local config drive /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.5171] manager: (tap402578bf-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 03:37:58 np0005539551 kernel: tap402578bf-9e: entered promiscuous mode
Nov 29 03:37:58 np0005539551 systemd-udevd[292606]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:58Z|00799|binding|INFO|Claiming lport 402578bf-9e47-411e-a9a0-bd831f9b380e for this chassis.
Nov 29 03:37:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:58Z|00800|binding|INFO|402578bf-9e47-411e-a9a0-bd831f9b380e: Claiming fa:16:3e:a7:a3:18 10.100.0.8
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.558 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.566 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a3:18 10.100.0.8'], port_security=['fa:16:3e:a7:a3:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fc82e8c-6159-4a45-995b-c8e306dade2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b219762-b87f-4f30-9b8a-68045d5ed356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244cacb4-e852-4547-86ff-006a2141eb2a, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=402578bf-9e47-411e-a9a0-bd831f9b380e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.568 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 402578bf-9e47-411e-a9a0-bd831f9b380e in datapath 4dc7ed86-80fb-4377-a5d0-8edd5e264c14 bound to our chassis#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.569 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dc7ed86-80fb-4377-a5d0-8edd5e264c14#033[00m
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.5710] device (tap402578bf-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.5716] device (tap402578bf-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:37:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:58Z|00801|binding|INFO|Setting lport 402578bf-9e47-411e-a9a0-bd831f9b380e ovn-installed in OVS
Nov 29 03:37:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:58Z|00802|binding|INFO|Setting lport 402578bf-9e47-411e-a9a0-bd831f9b380e up in Southbound
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.577 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.582 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[caf98bdd-f89b-4631-bb8a-b46ace378f47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.582 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4dc7ed86-81 in ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.584 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4dc7ed86-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.584 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[933bc790-0625-4d1f-8666-b4370864c6f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.585 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb95bfb-a8aa-41fd-817d-5a300f273304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 systemd-machined[190756]: New machine qemu-84-instance-000000b6.
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.604 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd3ba08-e6b7-4c6c-849d-8c2af45e7bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 systemd[1]: Started Virtual Machine qemu-84-instance-000000b6.
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.619 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9b64e718-4a55-438a-9602-84365147d1b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.650 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c726fbc7-ebf6-43fe-8613-5023484fd0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.655 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b90954-112f-4e9f-85a9-3397ce0285aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.6558] manager: (tap4dc7ed86-80): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 03:37:58 np0005539551 systemd-udevd[292610]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.683 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[368b0868-b7cf-4817-982a-2409935425d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.686 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0526c0bc-e924-4dd1-943d-6c2f4f3013d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.7077] device (tap4dc7ed86-80): carrier: link connected
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.713 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[82204512-f1a0-4418-a1d7-31bf4627368f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.729 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[57b815ff-d276-4437-b16f-16e8c72720b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dc7ed86-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852558, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292642, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.749 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d75e6296-c3fe-46f1-abbf-f4d96e0d47f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:c520'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852558, 'tstamp': 852558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292643, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.766 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[39c8e6f2-d6bc-4135-8029-2e457eeec776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dc7ed86-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852558, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292644, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.806 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[46ce2bf3-a44a-4078-8c16-d3a87537c33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.867 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d343d758-901f-4c6a-9285-e8a9e1fe7016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.868 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dc7ed86-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.869 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.869 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dc7ed86-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:58 np0005539551 kernel: tap4dc7ed86-80: entered promiscuous mode
Nov 29 03:37:58 np0005539551 NetworkManager[48922]: <info>  [1764405478.8716] manager: (tap4dc7ed86-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.876 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.879 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dc7ed86-80, col_values=(('external_ids', {'iface-id': '75cd498f-8b8f-4566-9772-7d1f56716c28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:37:58Z|00803|binding|INFO|Releasing lport 75cd498f-8b8f-4566-9772-7d1f56716c28 from this chassis (sb_readonly=0)
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.882 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.882 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[47fa1969-675a-43e9-9308-c1b100862cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.884 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-4dc7ed86-80fb-4377-a5d0-8edd5e264c14
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 4dc7ed86-80fb-4377-a5d0-8edd5e264c14
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:37:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:37:58.886 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'env', 'PROCESS_TAG=haproxy-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:37:58 np0005539551 nova_compute[227360]: 2025-11-29 08:37:58.895 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.059 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405479.0589771, 8fc82e8c-6159-4a45-995b-c8e306dade2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.060 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.081 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.085 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405479.0600574, 8fc82e8c-6159-4a45-995b-c8e306dade2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.086 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.107 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.112 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.124 227364 DEBUG nova.network.neutron [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updated VIF entry in instance network info cache for port 402578bf-9e47-411e-a9a0-bd831f9b380e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.125 227364 DEBUG nova.network.neutron [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updating instance_info_cache with network_info: [{"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.132 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.144 227364 DEBUG oslo_concurrency.lockutils [req-35c84f09-d59e-4387-8b59-a407041f65e9 req-7b383fb3-62ee-47c9-969d-63a65e930704 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:59 np0005539551 podman[292718]: 2025-11-29 08:37:59.224909856 +0000 UTC m=+0.046456298 container create ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:37:59 np0005539551 systemd[1]: Started libpod-conmon-ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303.scope.
Nov 29 03:37:59 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:37:59 np0005539551 podman[292718]: 2025-11-29 08:37:59.200474945 +0000 UTC m=+0.022021417 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:37:59 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e5033b4ef758d3a0bbb3fb79cf781a6e341e9288bb48a37eebbc9dab9b906a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:59 np0005539551 podman[292718]: 2025-11-29 08:37:59.310118171 +0000 UTC m=+0.131664623 container init ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:37:59 np0005539551 podman[292718]: 2025-11-29 08:37:59.315454335 +0000 UTC m=+0.137000787 container start ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.316 227364 DEBUG nova.compute.manager [req-7781d0ac-d826-4e71-8b4b-5a3546d79628 req-279645b9-2812-49e2-99e6-44a39699edae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.317 227364 DEBUG oslo_concurrency.lockutils [req-7781d0ac-d826-4e71-8b4b-5a3546d79628 req-279645b9-2812-49e2-99e6-44a39699edae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.318 227364 DEBUG oslo_concurrency.lockutils [req-7781d0ac-d826-4e71-8b4b-5a3546d79628 req-279645b9-2812-49e2-99e6-44a39699edae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.318 227364 DEBUG oslo_concurrency.lockutils [req-7781d0ac-d826-4e71-8b4b-5a3546d79628 req-279645b9-2812-49e2-99e6-44a39699edae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.318 227364 DEBUG nova.compute.manager [req-7781d0ac-d826-4e71-8b4b-5a3546d79628 req-279645b9-2812-49e2-99e6-44a39699edae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Processing event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.319 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.324 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405479.3240561, 8fc82e8c-6159-4a45-995b-c8e306dade2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.324 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.326 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.330 227364 INFO nova.virt.libvirt.driver [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Instance spawned successfully.#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.332 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:37:59 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [NOTICE]   (292738) : New worker (292740) forked
Nov 29 03:37:59 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [NOTICE]   (292738) : Loading success.
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.353 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.360 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.366 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.367 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.367 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.368 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.368 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.369 227364 DEBUG nova.virt.libvirt.driver [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.395 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.418 227364 INFO nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Took 6.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.419 227364 DEBUG nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.483 227364 INFO nova.compute.manager [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Took 7.99 seconds to build instance.#033[00m
Nov 29 03:37:59 np0005539551 nova_compute[227360]: 2025-11-29 08:37:59.498 227364 DEBUG oslo_concurrency.lockutils [None req-0943f999-9c1c-46ad-93c4-1a3236782724 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:37:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.403 227364 DEBUG nova.compute.manager [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.405 227364 DEBUG oslo_concurrency.lockutils [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.406 227364 DEBUG oslo_concurrency.lockutils [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.406 227364 DEBUG oslo_concurrency.lockutils [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.406 227364 DEBUG nova.compute.manager [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] No waiting events found dispatching network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.406 227364 WARNING nova.compute.manager [req-e4f7ecef-8bd1-4fe4-a209-a55d0df4bfbf req-0c29d920-8991-40c0-953f-4e08ee5b7269 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received unexpected event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:38:01 np0005539551 nova_compute[227360]: 2025-11-29 08:38:01.703 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.753 227364 DEBUG nova.compute.manager [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.754 227364 DEBUG nova.compute.manager [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing instance network info cache due to event network-changed-06d48b3b-ec07-4803-9204-b300217b41d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.754 227364 DEBUG oslo_concurrency.lockutils [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.755 227364 DEBUG oslo_concurrency.lockutils [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:02 np0005539551 nova_compute[227360]: 2025-11-29 08:38:02.755 227364 DEBUG nova.network.neutron [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Refreshing network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.497 227364 DEBUG nova.compute.manager [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-changed-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.498 227364 DEBUG nova.compute.manager [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Refreshing instance network info cache due to event network-changed-402578bf-9e47-411e-a9a0-bd831f9b380e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.498 227364 DEBUG oslo_concurrency.lockutils [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.499 227364 DEBUG oslo_concurrency.lockutils [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.499 227364 DEBUG nova.network.neutron [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Refreshing network info cache for port 402578bf-9e47-411e-a9a0-bd831f9b380e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:03 np0005539551 nova_compute[227360]: 2025-11-29 08:38:03.829 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.057885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484057969, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1097, "num_deletes": 252, "total_data_size": 2176759, "memory_usage": 2221856, "flush_reason": "Manual Compaction"}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484067442, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 1435390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61199, "largest_seqno": 62290, "table_properties": {"data_size": 1430581, "index_size": 2333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11190, "raw_average_key_size": 20, "raw_value_size": 1420623, "raw_average_value_size": 2555, "num_data_blocks": 102, "num_entries": 556, "num_filter_entries": 556, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405404, "oldest_key_time": 1764405404, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 9581 microseconds, and 3891 cpu microseconds.
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.067477) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 1435390 bytes OK
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.067493) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069109) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069122) EVENT_LOG_v1 {"time_micros": 1764405484069118, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2171360, prev total WAL file size 2171360, number of live WAL files 2.
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(1401KB)], [123(13MB)]
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484069924, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 15312612, "oldest_snapshot_seqno": -1}
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.076 227364 DEBUG nova.network.neutron [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updated VIF entry in instance network info cache for port 06d48b3b-ec07-4803-9204-b300217b41d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.077 227364 DEBUG nova.network.neutron [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [{"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.092 227364 DEBUG oslo_concurrency.lockutils [req-0a7c7930-0578-407e-ac8f-9d85f7205727 req-082f3992-0bae-4309-b733-05791de767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.125 227364 DEBUG oslo_concurrency.lockutils [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.126 227364 DEBUG oslo_concurrency.lockutils [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.139 227364 INFO nova.compute.manager [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Detaching volume d0fba77f-63fd-4100-a2fb-b81bfa4b659a#033[00m
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9215 keys, 13395885 bytes, temperature: kUnknown
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484168342, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 13395885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13334487, "index_size": 37286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23045, "raw_key_size": 243376, "raw_average_key_size": 26, "raw_value_size": 13170371, "raw_average_value_size": 1429, "num_data_blocks": 1426, "num_entries": 9215, "num_filter_entries": 9215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.168555) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 13395885 bytes
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.173362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 136.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(20.0) write-amplify(9.3) OK, records in: 9736, records dropped: 521 output_compression: NoCompression
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.173403) EVENT_LOG_v1 {"time_micros": 1764405484173388, "job": 78, "event": "compaction_finished", "compaction_time_micros": 98480, "compaction_time_cpu_micros": 28089, "output_level": 6, "num_output_files": 1, "total_output_size": 13395885, "num_input_records": 9736, "num_output_records": 9215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484173766, "job": 78, "event": "table_file_deletion", "file_number": 125}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484175529, "job": 78, "event": "table_file_deletion", "file_number": 123}
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.175569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.175574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.175575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.175577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:04.175578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.279 227364 INFO nova.virt.block_device [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Attempting to driver detach volume d0fba77f-63fd-4100-a2fb-b81bfa4b659a from mountpoint /dev/vdb#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.287 227364 DEBUG nova.virt.libvirt.driver [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Attempting to detach device vdb from instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.288 227364 DEBUG nova.virt.libvirt.guest [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-d0fba77f-63fd-4100-a2fb-b81bfa4b659a">
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <serial>d0fba77f-63fd-4100-a2fb-b81bfa4b659a</serial>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:38:04 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.295 227364 INFO nova.virt.libvirt.driver [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully detached device vdb from instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 from the persistent domain config.#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.296 227364 DEBUG nova.virt.libvirt.driver [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.296 227364 DEBUG nova.virt.libvirt.guest [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-d0fba77f-63fd-4100-a2fb-b81bfa4b659a">
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <serial>d0fba77f-63fd-4100-a2fb-b81bfa4b659a</serial>
Nov 29 03:38:04 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:38:04 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:38:04 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.404 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405484.4037426, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.405 227364 DEBUG nova.virt.libvirt.driver [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.407 227364 INFO nova.virt.libvirt.driver [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully detached device vdb from instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 from the live domain config.#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.580 227364 DEBUG nova.objects.instance [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:04 np0005539551 nova_compute[227360]: 2025-11-29 08:38:04.623 227364 DEBUG oslo_concurrency.lockutils [None req-58f99404-61f0-453b-81f9-0ed7eeda471f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:05 np0005539551 nova_compute[227360]: 2025-11-29 08:38:05.136 227364 DEBUG nova.network.neutron [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updated VIF entry in instance network info cache for port 402578bf-9e47-411e-a9a0-bd831f9b380e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:05 np0005539551 nova_compute[227360]: 2025-11-29 08:38:05.136 227364 DEBUG nova.network.neutron [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updating instance_info_cache with network_info: [{"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:05 np0005539551 nova_compute[227360]: 2025-11-29 08:38:05.156 227364 DEBUG oslo_concurrency.lockutils [req-89b367a9-fec5-4124-9976-2d612e7a486d req-05513baf-2c95-4636-a087-a66448d1a93f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8fc82e8c-6159-4a45-995b-c8e306dade2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.988 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.989 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.989 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.990 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.990 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.992 227364 INFO nova.compute.manager [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Terminating instance#033[00m
Nov 29 03:38:06 np0005539551 nova_compute[227360]: 2025-11-29 08:38:06.993 227364 DEBUG nova.compute.manager [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:07 np0005539551 kernel: tap06d48b3b-ec (unregistering): left promiscuous mode
Nov 29 03:38:07 np0005539551 NetworkManager[48922]: <info>  [1764405487.0877] device (tap06d48b3b-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:07Z|00804|binding|INFO|Releasing lport 06d48b3b-ec07-4803-9204-b300217b41d1 from this chassis (sb_readonly=0)
Nov 29 03:38:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:07Z|00805|binding|INFO|Setting lport 06d48b3b-ec07-4803-9204-b300217b41d1 down in Southbound
Nov 29 03:38:07 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:07Z|00806|binding|INFO|Removing iface tap06d48b3b-ec ovn-installed in OVS
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.092 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.099 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:8f:68 10.100.0.11'], port_security=['fa:16:3e:9b:8f:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1f01c882-bb44-4ba5-b82f-e3ffa31b8df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=06d48b3b-ec07-4803-9204-b300217b41d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.101 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 06d48b3b-ec07-4803-9204-b300217b41d1 in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 unbound from our chassis#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.104 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0430aba5-d0d7-4d98-ad87-552e6639c190, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.105 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7cc131-d918-4f8b-9329-554dba7adff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.106 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace which is not needed anymore#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539551 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b2.scope: Consumed 16.202s CPU time.
Nov 29 03:38:07 np0005539551 systemd-machined[190756]: Machine qemu-83-instance-000000b2 terminated.
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.228 227364 INFO nova.virt.libvirt.driver [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Instance destroyed successfully.#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.229 227364 DEBUG nova.objects.instance [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'resources' on Instance uuid 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:07 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [NOTICE]   (292103) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:07 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [NOTICE]   (292103) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:07 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [WARNING]  (292103) : Exiting Master process...
Nov 29 03:38:07 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [ALERT]    (292103) : Current worker (292105) exited with code 143 (Terminated)
Nov 29 03:38:07 np0005539551 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[292099]: [WARNING]  (292103) : All workers exited. Exiting... (0)
Nov 29 03:38:07 np0005539551 systemd[1]: libpod-f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539551 podman[292775]: 2025-11-29 08:38:07.243682122 +0000 UTC m=+0.045990034 container died f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.247 227364 DEBUG nova.virt.libvirt.vif [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-111008662',display_name='tempest-TestMinimumBasicScenario-server-111008662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-111008662',id=178,image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCVJYdnP2TneuDoeBAXwKH6f73W3V8CnCT6vaTZikIJCryJZhoLWCSF0AlKVy0dYmyVNXlJowtpUs2K4/f0SYWoTrLCWDcueCumiTrkMJ87CdWQP8BrlEpeZKDWjOsS6rQ==',key_name='tempest-TestMinimumBasicScenario-445026150',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-4umr4qgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8a30eb6e-dddc-4108-8576-dcd7c8d5406f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:21Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=1f01c882-bb44-4ba5-b82f-e3ffa31b8df1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.248 227364 DEBUG nova.network.os_vif_util [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "06d48b3b-ec07-4803-9204-b300217b41d1", "address": "fa:16:3e:9b:8f:68", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d48b3b-ec", "ovs_interfaceid": "06d48b3b-ec07-4803-9204-b300217b41d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.248 227364 DEBUG nova.network.os_vif_util [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.249 227364 DEBUG os_vif [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.251 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.251 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06d48b3b-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.253 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.255 227364 INFO os_vif [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:8f:68,bridge_name='br-int',has_traffic_filtering=True,id=06d48b3b-ec07-4803-9204-b300217b41d1,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d48b3b-ec')#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.359 227364 DEBUG nova.compute.manager [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.360 227364 DEBUG oslo_concurrency.lockutils [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.360 227364 DEBUG oslo_concurrency.lockutils [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.361 227364 DEBUG oslo_concurrency.lockutils [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.361 227364 DEBUG nova.compute.manager [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.361 227364 DEBUG nova.compute.manager [req-a1f599bb-aea2-4dae-9d24-56dc6ee002b1 req-9778e67a-b71b-4111-aca6-3d47f14fd20f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-unplugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:07 np0005539551 systemd[1]: var-lib-containers-storage-overlay-76bc33b94933a08dad2c71ff30b567ac3e1367f5791b1903fd741d8e96a15193-merged.mount: Deactivated successfully.
Nov 29 03:38:07 np0005539551 podman[292775]: 2025-11-29 08:38:07.497577241 +0000 UTC m=+0.299885133 container cleanup f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:07.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:07 np0005539551 podman[292830]: 2025-11-29 08:38:07.854406194 +0000 UTC m=+0.337589324 container remove f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.864 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[15cfecca-f334-45c5-91c8-4c136fa1f7d7]: (4, ('Sat Nov 29 08:38:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418)\nf210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418\nSat Nov 29 08:38:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418)\nf210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.867 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[92fde56b-9079-4acd-bed1-4dc9740917dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.869 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.871 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 kernel: tap0430aba5-d0: left promiscuous mode
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.873 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.878 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9e18428b-f3a2-4f01-93a5-7f9025003b50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 nova_compute[227360]: 2025-11-29 08:38:07.889 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.900 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1d8d8d-c651-4266-911a-482ba324a901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.901 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4c6c92-2220-4320-90b8-d014dcf6ca5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 systemd[1]: libpod-conmon-f210e522308663adc9e445892b1e3fadf156117d0e02dcdf3193dcb9ff18c418.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.923 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e7387084-efee-41d9-9856-d2aea51b6c74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848743, 'reachable_time': 39481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292846, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.926 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:07.926 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[c14ee422-77ed-4bce-910c-b29902dabf2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539551 systemd[1]: run-netns-ovnmeta\x2d0430aba5\x2dd0d7\x2d4d98\x2dad87\x2d552e6639c190.mount: Deactivated successfully.
Nov 29 03:38:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.471 227364 INFO nova.virt.libvirt.driver [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Deleting instance files /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_del#033[00m
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.472 227364 INFO nova.virt.libvirt.driver [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Deletion of /var/lib/nova/instances/1f01c882-bb44-4ba5-b82f-e3ffa31b8df1_del complete#033[00m
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.529 227364 INFO nova.compute.manager [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.530 227364 DEBUG oslo.service.loopingcall [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.530 227364 DEBUG nova.compute.manager [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.530 227364 DEBUG nova.network.neutron [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 03:38:08 np0005539551 nova_compute[227360]: 2025-11-29 08:38:08.830 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.494 227364 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.494 227364 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.495 227364 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.495 227364 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.495 227364 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] No waiting events found dispatching network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.495 227364 WARNING nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received unexpected event network-vif-plugged-06d48b3b-ec07-4803-9204-b300217b41d1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.661 227364 DEBUG nova.network.neutron [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.680 227364 INFO nova.compute.manager [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.738 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.739 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.780 227364 DEBUG nova.compute.manager [req-fca8ea7d-c9c3-45c2-bfbf-2a037cc41e9b req-a2fc4bd8-0f4a-4337-a88f-844dcbd01d20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Received event network-vif-deleted-06d48b3b-ec07-4803-9204-b300217b41d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:09 np0005539551 nova_compute[227360]: 2025-11-29 08:38:09.820 227364 DEBUG oslo_concurrency.processutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:09.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1555248539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.289 227364 DEBUG oslo_concurrency.processutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.294 227364 DEBUG nova.compute.provider_tree [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.319 227364 DEBUG nova.scheduler.client.report [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.344 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.377 227364 INFO nova.scheduler.client.report [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Deleted allocations for instance 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1#033[00m
Nov 29 03:38:10 np0005539551 nova_compute[227360]: 2025-11-29 08:38:10.448 227364 DEBUG oslo_concurrency.lockutils [None req-2b8e6510-4646-47cd-850f-03b3b1d5e476 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "1f01c882-bb44-4ba5-b82f-e3ffa31b8df1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 03:38:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 03:38:12 np0005539551 nova_compute[227360]: 2025-11-29 08:38:12.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539551 nova_compute[227360]: 2025-11-29 08:38:13.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:13.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:14Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:a3:18 10.100.0.8
Nov 29 03:38:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:14Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:a3:18 10.100.0.8
Nov 29 03:38:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 03:38:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:17 np0005539551 nova_compute[227360]: 2025-11-29 08:38:17.264 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539551 nova_compute[227360]: 2025-11-29 08:38:18.838 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:19.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:19.887 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:19.888 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:19.888 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 03:38:20 np0005539551 nova_compute[227360]: 2025-11-29 08:38:20.354 227364 INFO nova.compute.manager [None req-26103b9e-c3e8-41bd-991d-fba956030416 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Get console output#033[00m
Nov 29 03:38:20 np0005539551 nova_compute[227360]: 2025-11-29 08:38:20.360 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:38:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.075 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.076 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.076 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.077 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.077 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.078 227364 INFO nova.compute.manager [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Terminating instance#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.081 227364 DEBUG nova.compute.manager [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:21 np0005539551 kernel: tap402578bf-9e (unregistering): left promiscuous mode
Nov 29 03:38:21 np0005539551 NetworkManager[48922]: <info>  [1764405501.1415] device (tap402578bf-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:21Z|00807|binding|INFO|Releasing lport 402578bf-9e47-411e-a9a0-bd831f9b380e from this chassis (sb_readonly=0)
Nov 29 03:38:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:21Z|00808|binding|INFO|Setting lport 402578bf-9e47-411e-a9a0-bd831f9b380e down in Southbound
Nov 29 03:38:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:21Z|00809|binding|INFO|Removing iface tap402578bf-9e ovn-installed in OVS
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.168 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.170 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.177 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:a3:18 10.100.0.8'], port_security=['fa:16:3e:a7:a3:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fc82e8c-6159-4a45-995b-c8e306dade2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b219762-b87f-4f30-9b8a-68045d5ed356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244cacb4-e852-4547-86ff-006a2141eb2a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=402578bf-9e47-411e-a9a0-bd831f9b380e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.178 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 402578bf-9e47-411e-a9a0-bd831f9b380e in datapath 4dc7ed86-80fb-4377-a5d0-8edd5e264c14 unbound from our chassis#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.179 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dc7ed86-80fb-4377-a5d0-8edd5e264c14, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.180 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5aecd7f0-4768-4e3c-9e68-e9d43a8cdf3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.181 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 namespace which is not needed anymore#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.182 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Nov 29 03:38:21 np0005539551 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b6.scope: Consumed 14.129s CPU time.
Nov 29 03:38:21 np0005539551 systemd-machined[190756]: Machine qemu-84-instance-000000b6 terminated.
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.310 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.323 227364 INFO nova.virt.libvirt.driver [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Instance destroyed successfully.#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.323 227364 DEBUG nova.objects.instance [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 8fc82e8c-6159-4a45-995b-c8e306dade2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.334 227364 DEBUG nova.virt.libvirt.vif [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-150909303',display_name='tempest-TestNetworkBasicOps-server-150909303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-150909303',id=182,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4R36grIphtmcogfN7f7XUKzA3WvhJ7uldgnXhl3WPmJdcvwh2b8mCMqxuy1g7iUfdW30tra7vF5HZPT48iRw9BOGYiesVGDOm3G7pRZI5xev3739PbDhH4Jg23RHG3qA==',key_name='tempest-TestNetworkBasicOps-1973126684',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-fd0m3fdd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:59Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=8fc82e8c-6159-4a45-995b-c8e306dade2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.335 227364 DEBUG nova.network.os_vif_util [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "402578bf-9e47-411e-a9a0-bd831f9b380e", "address": "fa:16:3e:a7:a3:18", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap402578bf-9e", "ovs_interfaceid": "402578bf-9e47-411e-a9a0-bd831f9b380e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.335 227364 DEBUG nova.network.os_vif_util [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.336 227364 DEBUG os_vif [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.337 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.338 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap402578bf-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.340 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.342 227364 INFO os_vif [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:a3:18,bridge_name='br-int',has_traffic_filtering=True,id=402578bf-9e47-411e-a9a0-bd831f9b380e,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap402578bf-9e')#033[00m
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [NOTICE]   (292738) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [NOTICE]   (292738) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [WARNING]  (292738) : Exiting Master process...
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [WARNING]  (292738) : Exiting Master process...
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [ALERT]    (292738) : Current worker (292740) exited with code 143 (Terminated)
Nov 29 03:38:21 np0005539551 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[292734]: [WARNING]  (292738) : All workers exited. Exiting... (0)
Nov 29 03:38:21 np0005539551 systemd[1]: libpod-ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303.scope: Deactivated successfully.
Nov 29 03:38:21 np0005539551 podman[293147]: 2025-11-29 08:38:21.437101149 +0000 UTC m=+0.162075136 container died ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:21 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414749904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.573 227364 DEBUG nova.compute.manager [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-unplugged-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.574 227364 DEBUG oslo_concurrency.lockutils [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.574 227364 DEBUG oslo_concurrency.lockutils [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.575 227364 DEBUG oslo_concurrency.lockutils [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.575 227364 DEBUG nova.compute.manager [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] No waiting events found dispatching network-vif-unplugged-402578bf-9e47-411e-a9a0-bd831f9b380e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.575 227364 DEBUG nova.compute.manager [req-8a79b33a-62d2-4ce6-9010-6e6aca4a4ca0 req-4540c46c-127b-4935-adc9-98902468c5ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-unplugged-402578bf-9e47-411e-a9a0-bd831f9b380e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:21 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:21 np0005539551 systemd[1]: var-lib-containers-storage-overlay-17e5033b4ef758d3a0bbb3fb79cf781a6e341e9288bb48a37eebbc9dab9b906a-merged.mount: Deactivated successfully.
Nov 29 03:38:21 np0005539551 podman[293147]: 2025-11-29 08:38:21.607627671 +0000 UTC m=+0.332601648 container cleanup ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:38:21 np0005539551 systemd[1]: libpod-conmon-ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303.scope: Deactivated successfully.
Nov 29 03:38:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:21.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:21 np0005539551 podman[293204]: 2025-11-29 08:38:21.949805778 +0000 UTC m=+0.320238324 container remove ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.956 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f9728820-707f-45ff-9822-e2f3532a157e]: (4, ('Sat Nov 29 08:38:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 (ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303)\nff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303\nSat Nov 29 08:38:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 (ff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303)\nff8accfb4c9ef482cd7bfeb67d9ba9daad70905ef8567d09e31ddfcc07377303\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.958 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9d7e993b-df0c-4cf5-a4e7-c6a0e78e0985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.959 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dc7ed86-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 kernel: tap4dc7ed86-80: left promiscuous mode
Nov 29 03:38:21 np0005539551 nova_compute[227360]: 2025-11-29 08:38:21.976 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.978 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[81ee6a33-fed6-44eb-9ede-d71f0dc57162]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.997 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9336ce75-e755-45ac-8fa1-379a9739d664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:21.998 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d5893942-b473-4311-acdb-c7c4efa5102c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:22.021 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a56fa37-160a-4e8c-acf7-59290fbbdf73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852552, 'reachable_time': 41995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293220, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:22.023 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:22.024 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[af0ef6ed-26bd-4898-80a6-dc9941bed948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:22 np0005539551 systemd[1]: run-netns-ovnmeta\x2d4dc7ed86\x2d80fb\x2d4377\x2da5d0\x2d8edd5e264c14.mount: Deactivated successfully.
Nov 29 03:38:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.225 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405487.224558, 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.226 227364 INFO nova.compute.manager [-] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.251 227364 DEBUG nova.compute.manager [None req-40e7a7e5-c6bd-4534-9753-26e0e4855383 - - - - - -] [instance: 1f01c882-bb44-4ba5-b82f-e3ffa31b8df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.747 227364 INFO nova.virt.libvirt.driver [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Deleting instance files /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a_del#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.748 227364 INFO nova.virt.libvirt.driver [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Deletion of /var/lib/nova/instances/8fc82e8c-6159-4a45-995b-c8e306dade2a_del complete#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.820 227364 INFO nova.compute.manager [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.820 227364 DEBUG oslo.service.loopingcall [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.820 227364 DEBUG nova.compute.manager [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:22 np0005539551 nova_compute[227360]: 2025-11-29 08:38:22.821 227364 DEBUG nova.network.neutron [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.693 227364 DEBUG nova.compute.manager [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.693 227364 DEBUG oslo_concurrency.lockutils [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.693 227364 DEBUG oslo_concurrency.lockutils [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.694 227364 DEBUG oslo_concurrency.lockutils [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.694 227364 DEBUG nova.compute.manager [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] No waiting events found dispatching network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.694 227364 WARNING nova.compute.manager [req-a2fc43c2-ef94-44cf-89d0-1c8a71a8b865 req-86c7901a-689f-4080-b37c-f8fc410731ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received unexpected event network-vif-plugged-402578bf-9e47-411e-a9a0-bd831f9b380e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.765 227364 DEBUG nova.network.neutron [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.785 227364 INFO nova.compute.manager [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Took 0.96 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.833 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.834 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.842 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.858 227364 DEBUG nova.compute.manager [req-57883c21-1e6f-4e07-a408-86fb44f3fa43 req-b81599c7-629e-4423-95f7-4cbdbe8bf906 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Received event network-vif-deleted-402578bf-9e47-411e-a9a0-bd831f9b380e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:23.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.885 227364 DEBUG oslo_concurrency.processutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:23.928 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:23 np0005539551 nova_compute[227360]: 2025-11-29 08:38:23.928 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:23.929 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:38:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/897301272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.308 227364 DEBUG oslo_concurrency.processutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.313 227364 DEBUG nova.compute.provider_tree [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.327 227364 DEBUG nova.scheduler.client.report [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.349 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.378 227364 INFO nova.scheduler.client.report [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 8fc82e8c-6159-4a45-995b-c8e306dade2a
Nov 29 03:38:24 np0005539551 nova_compute[227360]: 2025-11-29 08:38:24.452 227364 DEBUG oslo_concurrency.lockutils [None req-93fe3af9-f975-44a3-be12-4e98c43da708 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "8fc82e8c-6159-4a45-995b-c8e306dade2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:24 np0005539551 podman[293245]: 2025-11-29 08:38:24.618333157 +0000 UTC m=+0.063249501 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:24 np0005539551 podman[293244]: 2025-11-29 08:38:24.619597182 +0000 UTC m=+0.064797945 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:38:24 np0005539551 podman[293243]: 2025-11-29 08:38:24.646557681 +0000 UTC m=+0.093907781 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:38:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539551 nova_compute[227360]: 2025-11-29 08:38:26.340 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:28 np0005539551 nova_compute[227360]: 2025-11-29 08:38:28.842 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:29.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:29 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:29.931 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:38:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:31 np0005539551 nova_compute[227360]: 2025-11-29 08:38:31.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:33 np0005539551 nova_compute[227360]: 2025-11-29 08:38:33.844 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:33.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 03:38:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:35 np0005539551 nova_compute[227360]: 2025-11-29 08:38:35.582 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:38:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283364703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:38:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:38:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283364703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:38:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:35 np0005539551 nova_compute[227360]: 2025-11-29 08:38:35.864 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:36 np0005539551 nova_compute[227360]: 2025-11-29 08:38:36.322 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405501.320895, 8fc82e8c-6159-4a45-995b-c8e306dade2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:38:36 np0005539551 nova_compute[227360]: 2025-11-29 08:38:36.322 227364 INFO nova.compute.manager [-] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] VM Stopped (Lifecycle Event)
Nov 29 03:38:36 np0005539551 nova_compute[227360]: 2025-11-29 08:38:36.342 227364 DEBUG nova.compute.manager [None req-1190e8f0-f2da-4c5b-9f2e-c8da07f546aa - - - - - -] [instance: 8fc82e8c-6159-4a45-995b-c8e306dade2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:38:36 np0005539551 nova_compute[227360]: 2025-11-29 08:38:36.422 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539551 nova_compute[227360]: 2025-11-29 08:38:38.846 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:41 np0005539551 nova_compute[227360]: 2025-11-29 08:38:41.424 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539551 nova_compute[227360]: 2025-11-29 08:38:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:43 np0005539551 nova_compute[227360]: 2025-11-29 08:38:43.869 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.213836) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525213869, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 858, "num_deletes": 254, "total_data_size": 1543350, "memory_usage": 1572560, "flush_reason": "Manual Compaction"}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525221238, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 776540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62296, "largest_seqno": 63148, "table_properties": {"data_size": 772681, "index_size": 1574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10208, "raw_average_key_size": 21, "raw_value_size": 764573, "raw_average_value_size": 1623, "num_data_blocks": 67, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405484, "oldest_key_time": 1764405484, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 7497 microseconds, and 3800 cpu microseconds.
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.221329) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 776540 bytes OK
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.221355) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.222614) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.222634) EVENT_LOG_v1 {"time_micros": 1764405525222627, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.222652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1538842, prev total WAL file size 1538842, number of live WAL files 2.
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.223523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(758KB)], [126(12MB)]
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525223566, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 14172425, "oldest_snapshot_seqno": -1}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9174 keys, 10517876 bytes, temperature: kUnknown
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525286716, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 10517876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10460724, "index_size": 33103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 242881, "raw_average_key_size": 26, "raw_value_size": 10301333, "raw_average_value_size": 1122, "num_data_blocks": 1252, "num_entries": 9174, "num_filter_entries": 9174, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.287194) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10517876 bytes
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.288795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.9 rd, 166.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(31.8) write-amplify(13.5) OK, records in: 9686, records dropped: 512 output_compression: NoCompression
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.288825) EVENT_LOG_v1 {"time_micros": 1764405525288811, "job": 80, "event": "compaction_finished", "compaction_time_micros": 63301, "compaction_time_cpu_micros": 33919, "output_level": 6, "num_output_files": 1, "total_output_size": 10517876, "num_input_records": 9686, "num_output_records": 9174, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525289323, "job": 80, "event": "table_file_deletion", "file_number": 128}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525293990, "job": 80, "event": "table_file_deletion", "file_number": 126}
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.223436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.294061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.294065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.294067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.294068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:38:45.294070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:45.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539551 nova_compute[227360]: 2025-11-29 08:38:46.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:46 np0005539551 nova_compute[227360]: 2025-11-29 08:38:46.426 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.440 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.440 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3948338724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:47 np0005539551 nova_compute[227360]: 2025-11-29 08:38:47.865 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:47.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.018 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.019 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.89777374267578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.020 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.020 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.089 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.090 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.111 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3977509434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.548 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.554 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.573 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.595 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.595 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:48 np0005539551 nova_compute[227360]: 2025-11-29 08:38:48.872 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:49 np0005539551 nova_compute[227360]: 2025-11-29 08:38:49.597 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:49 np0005539551 nova_compute[227360]: 2025-11-29 08:38:49.597 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:38:49 np0005539551 nova_compute[227360]: 2025-11-29 08:38:49.597 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:38:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539551 nova_compute[227360]: 2025-11-29 08:38:50.521 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:38:50 np0005539551 nova_compute[227360]: 2025-11-29 08:38:50.522 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:50 np0005539551 nova_compute[227360]: 2025-11-29 08:38:50.946 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:50 np0005539551 nova_compute[227360]: 2025-11-29 08:38:50.947 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:50 np0005539551 nova_compute[227360]: 2025-11-29 08:38:50.963 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.038 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.038 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.044 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.044 227364 INFO nova.compute.claims [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.150 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/926697098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.575 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.582 227364 DEBUG nova.compute.provider_tree [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.603 227364 DEBUG nova.scheduler.client.report [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.635 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.636 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.706 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.706 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.741 227364 INFO nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.762 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.853 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.854 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.854 227364 INFO nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Creating image(s)#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.882 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:51.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.910 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.938 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:51 np0005539551 nova_compute[227360]: 2025-11-29 08:38:51.942 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.011 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.012 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.012 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.013 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.041 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.045 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.152 227364 DEBUG nova.policy [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.303 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.362 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.464 227364 DEBUG nova.objects.instance [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.479 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.479 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Ensure instance console log exists: /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.479 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.480 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:52 np0005539551 nova_compute[227360]: 2025-11-29 08:38:52.480 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.063 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Successfully created port: 89414cbe-0908-4dc8-af0e-648ea658b1fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.874 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:53.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.956 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Successfully updated port: 89414cbe-0908-4dc8-af0e-648ea658b1fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.976 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.976 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:53 np0005539551 nova_compute[227360]: 2025-11-29 08:38:53.976 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:54 np0005539551 nova_compute[227360]: 2025-11-29 08:38:54.098 227364 DEBUG nova.compute.manager [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:54 np0005539551 nova_compute[227360]: 2025-11-29 08:38:54.099 227364 DEBUG nova.compute.manager [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing instance network info cache due to event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:54 np0005539551 nova_compute[227360]: 2025-11-29 08:38:54.099 227364 DEBUG oslo_concurrency.lockutils [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:54 np0005539551 nova_compute[227360]: 2025-11-29 08:38:54.148 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:38:54 np0005539551 nova_compute[227360]: 2025-11-29 08:38:54.942 227364 DEBUG nova.network.neutron [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.021 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.021 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance network_info: |[{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.022 227364 DEBUG oslo_concurrency.lockutils [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.022 227364 DEBUG nova.network.neutron [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.025 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Start _get_guest_xml network_info=[{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.030 227364 WARNING nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.038 227364 DEBUG nova.virt.libvirt.host [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.039 227364 DEBUG nova.virt.libvirt.host [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.046 227364 DEBUG nova.virt.libvirt.host [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.047 227364 DEBUG nova.virt.libvirt.host [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.048 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.048 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.049 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.049 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.050 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.050 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.050 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.050 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.051 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.051 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.051 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.052 227364 DEBUG nova.virt.hardware [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.055 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4033724436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.536 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.562 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:55 np0005539551 nova_compute[227360]: 2025-11-29 08:38:55.571 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:55 np0005539551 podman[293615]: 2025-11-29 08:38:55.602996437 +0000 UTC m=+0.048124873 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:55 np0005539551 podman[293614]: 2025-11-29 08:38:55.610838579 +0000 UTC m=+0.060747774 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:38:55 np0005539551 podman[293613]: 2025-11-29 08:38:55.653401771 +0000 UTC m=+0.105756542 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:38:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:55.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2539556882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.007 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.009 227364 DEBUG nova.virt.libvirt.vif [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:51Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.009 227364 DEBUG nova.network.os_vif_util [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.010 227364 DEBUG nova.network.os_vif_util [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.011 227364 DEBUG nova.objects.instance [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.033 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <uuid>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</uuid>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <name>instance-000000b8</name>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestNetworkBasicOps-server-1601524375</nova:name>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:38:55</nova:creationTime>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <nova:port uuid="89414cbe-0908-4dc8-af0e-648ea658b1fa">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="serial">c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="uuid">c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:54:8d:5b"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <target dev="tap89414cbe-09"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log" append="off"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:38:56 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:38:56 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:38:56 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:38:56 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.034 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Preparing to wait for external event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.034 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.034 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.035 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.035 227364 DEBUG nova.virt.libvirt.vif [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:51Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.035 227364 DEBUG nova.network.os_vif_util [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.036 227364 DEBUG nova.network.os_vif_util [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.036 227364 DEBUG os_vif [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.037 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.037 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.038 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.040 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.040 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89414cbe-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.041 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89414cbe-09, col_values=(('external_ids', {'iface-id': '89414cbe-0908-4dc8-af0e-648ea658b1fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:8d:5b', 'vm-uuid': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.042 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539551 NetworkManager[48922]: <info>  [1764405536.0436] manager: (tap89414cbe-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.045 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.051 227364 INFO os_vif [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09')#033[00m
Nov 29 03:38:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.184 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.185 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.185 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:54:8d:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.186 227364 INFO nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Using config drive#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.206 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:56 np0005539551 nova_compute[227360]: 2025-11-29 08:38:56.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.007 227364 DEBUG nova.network.neutron [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated VIF entry in instance network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.008 227364 DEBUG nova.network.neutron [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.168 227364 DEBUG oslo_concurrency.lockutils [req-5bfc63bf-23c9-4a5c-9c51-bc664d708c8a req-f41fdeab-b932-4ea2-9324-d064aec2c70f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.289 227364 INFO nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Creating config drive at /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.294 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6vt6y2w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.432 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6vt6y2w" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.461 227364 DEBUG nova.storage.rbd_utils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.464 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.633 227364 DEBUG oslo_concurrency.processutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.634 227364 INFO nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Deleting local config drive /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:38:57 np0005539551 kernel: tap89414cbe-09: entered promiscuous mode
Nov 29 03:38:57 np0005539551 NetworkManager[48922]: <info>  [1764405537.6811] manager: (tap89414cbe-09): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 03:38:57 np0005539551 systemd-udevd[293786]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:57Z|00810|binding|INFO|Claiming lport 89414cbe-0908-4dc8-af0e-648ea658b1fa for this chassis.
Nov 29 03:38:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:57Z|00811|binding|INFO|89414cbe-0908-4dc8-af0e-648ea658b1fa: Claiming fa:16:3e:54:8d:5b 10.100.0.3
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.708 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.714 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539551 NetworkManager[48922]: <info>  [1764405537.7227] device (tap89414cbe-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:57 np0005539551 NetworkManager[48922]: <info>  [1764405537.7264] device (tap89414cbe-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.725 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8d:5b 10.100.0.3'], port_security=['fa:16:3e:54:8d:5b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-460c4768-d248-455a-be34-ea028712d091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '543f427b-dc17-4c93-a5d2-532bed14f830', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=376f822f-3723-4b37-8925-98e26047b898, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=89414cbe-0908-4dc8-af0e-648ea658b1fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.727 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 89414cbe-0908-4dc8-af0e-648ea658b1fa in datapath 460c4768-d248-455a-be34-ea028712d091 bound to our chassis#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.728 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 460c4768-d248-455a-be34-ea028712d091#033[00m
Nov 29 03:38:57 np0005539551 systemd-machined[190756]: New machine qemu-85-instance-000000b8.
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.741 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf5df55-37b9-48a9-9429-5472df14d060]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.741 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap460c4768-d1 in ovnmeta-460c4768-d248-455a-be34-ea028712d091 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.743 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap460c4768-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.743 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[efc92132-6eff-4713-9c6e-abd37f77b262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.744 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1899d152-abc3-45a2-a266-5d70687b6569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 systemd[1]: Started Virtual Machine qemu-85-instance-000000b8.
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.756 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[2df6f6c8-96b4-461d-ad4e-a6c5b09d3bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:57Z|00812|binding|INFO|Setting lport 89414cbe-0908-4dc8-af0e-648ea658b1fa ovn-installed in OVS
Nov 29 03:38:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:57Z|00813|binding|INFO|Setting lport 89414cbe-0908-4dc8-af0e-648ea658b1fa up in Southbound
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.784 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[407dfc16-9fac-4b0e-8025-9f3171cfde1a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 nova_compute[227360]: 2025-11-29 08:38:57.784 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.811 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c383e56f-f4df-4d53-b108-c34a5741e007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 systemd-udevd[293790]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:57 np0005539551 NetworkManager[48922]: <info>  [1764405537.8189] manager: (tap460c4768-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.818 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e787696a-ce4c-4da1-ba66-2d7ccb6c8d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.844 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f5807600-aebf-4456-a5a3-1fcad59283d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.847 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f6db4939-3a08-4928-845f-592eeab67345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 NetworkManager[48922]: <info>  [1764405537.8666] device (tap460c4768-d0): carrier: link connected
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.872 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[294467ad-b31b-4d1b-8b09-7a3e89370608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.889 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da2a527c-5a03-42d9-9cfc-4b5e80ac9cc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap460c4768-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:4d:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858474, 'reachable_time': 25100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293822, 'error': None, 'target': 'ovnmeta-460c4768-d248-455a-be34-ea028712d091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:57.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.910 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[29f79801-a0b1-41cb-9a3f-d6aa6d3b3b62]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:4d02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858474, 'tstamp': 858474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293823, 'error': None, 'target': 'ovnmeta-460c4768-d248-455a-be34-ea028712d091', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.927 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2135ef-ca14-488d-931f-1daed21233c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap460c4768-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:4d:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858474, 'reachable_time': 25100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293824, 'error': None, 'target': 'ovnmeta-460c4768-d248-455a-be34-ea028712d091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:57.956 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6f318cf8-c72d-4245-8bd3-ffbda0ec293e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.017 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb37e9e-b0ed-407c-b96f-6c45deabcb21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.019 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap460c4768-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.020 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.020 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap460c4768-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:58 np0005539551 NetworkManager[48922]: <info>  [1764405538.0229] manager: (tap460c4768-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 03:38:58 np0005539551 kernel: tap460c4768-d0: entered promiscuous mode
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.027 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.034 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap460c4768-d0, col_values=(('external_ids', {'iface-id': 'be178473-8fe5-4236-9899-659c9f39adc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:38:58Z|00814|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.036 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.045 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/460c4768-d248-455a-be34-ea028712d091.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/460c4768-d248-455a-be34-ea028712d091.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.046 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ee81eb9c-6bc1-46f2-b163-6486d47f4933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.046 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-460c4768-d248-455a-be34-ea028712d091
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/460c4768-d248-455a-be34-ea028712d091.pid.haproxy
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 460c4768-d248-455a-be34-ea028712d091
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:38:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:38:58.047 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-460c4768-d248-455a-be34-ea028712d091', 'env', 'PROCESS_TAG=haproxy-460c4768-d248-455a-be34-ea028712d091', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/460c4768-d248-455a-be34-ea028712d091.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:58 np0005539551 podman[293895]: 2025-11-29 08:38:58.415923393 +0000 UTC m=+0.045681077 container create 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.432 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405538.4318976, c9adc277-3d0a-4bb8-8b47-e9f72114cdfd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.433 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:58 np0005539551 systemd[1]: Started libpod-conmon-8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b.scope.
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.454 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.458 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405538.4333506, c9adc277-3d0a-4bb8-8b47-e9f72114cdfd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.459 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:38:58 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.478 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:58 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288d219dda5a6f1a7b42ef7e56fa11c9c74df7ba7654b9d5c5457006b0a213ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.485 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:58 np0005539551 podman[293895]: 2025-11-29 08:38:58.391483672 +0000 UTC m=+0.021241366 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:58 np0005539551 podman[293895]: 2025-11-29 08:38:58.49378996 +0000 UTC m=+0.123547674 container init 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:58 np0005539551 podman[293895]: 2025-11-29 08:38:58.49898836 +0000 UTC m=+0.128746044 container start 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.513 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:58 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [NOTICE]   (293916) : New worker (293918) forked
Nov 29 03:38:58 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [NOTICE]   (293916) : Loading success.
Nov 29 03:38:58 np0005539551 nova_compute[227360]: 2025-11-29 08:38:58.876 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:38:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:59.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:00 np0005539551 nova_compute[227360]: 2025-11-29 08:39:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:00 np0005539551 nova_compute[227360]: 2025-11-29 08:39:00.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:39:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:01 np0005539551 nova_compute[227360]: 2025-11-29 08:39:01.045 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:01.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.558 227364 DEBUG nova.compute.manager [req-69ee7aa7-4860-467a-b29b-eb135658d1f3 req-33f385ed-68d2-4239-8bb8-e11212575be3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.559 227364 DEBUG oslo_concurrency.lockutils [req-69ee7aa7-4860-467a-b29b-eb135658d1f3 req-33f385ed-68d2-4239-8bb8-e11212575be3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.559 227364 DEBUG oslo_concurrency.lockutils [req-69ee7aa7-4860-467a-b29b-eb135658d1f3 req-33f385ed-68d2-4239-8bb8-e11212575be3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.559 227364 DEBUG oslo_concurrency.lockutils [req-69ee7aa7-4860-467a-b29b-eb135658d1f3 req-33f385ed-68d2-4239-8bb8-e11212575be3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.559 227364 DEBUG nova.compute.manager [req-69ee7aa7-4860-467a-b29b-eb135658d1f3 req-33f385ed-68d2-4239-8bb8-e11212575be3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Processing event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.560 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.563 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405542.5627174, c9adc277-3d0a-4bb8-8b47-e9f72114cdfd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.563 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.565 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.567 227364 INFO nova.virt.libvirt.driver [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance spawned successfully.#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.568 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.589 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.594 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.597 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.597 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.598 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.598 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.599 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.599 227364 DEBUG nova.virt.libvirt.driver [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.624 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.662 227364 INFO nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Took 10.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.663 227364 DEBUG nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.725 227364 INFO nova.compute.manager [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Took 11.71 seconds to build instance.#033[00m
Nov 29 03:39:02 np0005539551 nova_compute[227360]: 2025-11-29 08:39:02.749 227364 DEBUG oslo_concurrency.lockutils [None req-8ef35475-77b2-4a97-8bfb-86e236c88352 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:03 np0005539551 nova_compute[227360]: 2025-11-29 08:39:03.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:03 np0005539551 nova_compute[227360]: 2025-11-29 08:39:03.878 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.663 227364 DEBUG nova.compute.manager [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.663 227364 DEBUG oslo_concurrency.lockutils [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.663 227364 DEBUG oslo_concurrency.lockutils [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.664 227364 DEBUG oslo_concurrency.lockutils [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.664 227364 DEBUG nova.compute.manager [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:04 np0005539551 nova_compute[227360]: 2025-11-29 08:39:04.664 227364 WARNING nova.compute.manager [req-05d7e8c6-635a-40b2-8ebf-28c3ef530432 req-f2ebbe63-0c0d-4336-b8ae-c829c4a3c839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa for instance with vm_state active and task_state None.#033[00m
Nov 29 03:39:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539551 nova_compute[227360]: 2025-11-29 08:39:06.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539551 NetworkManager[48922]: <info>  [1764405546.6023] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 03:39:06 np0005539551 NetworkManager[48922]: <info>  [1764405546.6034] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 03:39:06 np0005539551 nova_compute[227360]: 2025-11-29 08:39:06.601 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539551 nova_compute[227360]: 2025-11-29 08:39:06.849 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:06Z|00815|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:39:06 np0005539551 nova_compute[227360]: 2025-11-29 08:39:06.873 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:07 np0005539551 nova_compute[227360]: 2025-11-29 08:39:07.001 227364 DEBUG nova.compute.manager [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:07 np0005539551 nova_compute[227360]: 2025-11-29 08:39:07.001 227364 DEBUG nova.compute.manager [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing instance network info cache due to event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:07 np0005539551 nova_compute[227360]: 2025-11-29 08:39:07.001 227364 DEBUG oslo_concurrency.lockutils [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:07 np0005539551 nova_compute[227360]: 2025-11-29 08:39:07.001 227364 DEBUG oslo_concurrency.lockutils [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:07 np0005539551 nova_compute[227360]: 2025-11-29 08:39:07.002 227364 DEBUG nova.network.neutron [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 03:39:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539551 nova_compute[227360]: 2025-11-29 08:39:08.312 227364 DEBUG nova.network.neutron [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated VIF entry in instance network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:08 np0005539551 nova_compute[227360]: 2025-11-29 08:39:08.313 227364 DEBUG nova.network.neutron [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:08 np0005539551 nova_compute[227360]: 2025-11-29 08:39:08.329 227364 DEBUG oslo_concurrency.lockutils [req-934a3ab7-5b6b-4017-b0ee-da0573479334 req-e3b4f52d-dcc2-40ff-88e0-743396a016d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:08 np0005539551 nova_compute[227360]: 2025-11-29 08:39:08.911 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:09.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:11 np0005539551 nova_compute[227360]: 2025-11-29 08:39:11.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:13 np0005539551 nova_compute[227360]: 2025-11-29 08:39:13.913 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539551 nova_compute[227360]: 2025-11-29 08:39:16.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:17Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:8d:5b 10.100.0.3
Nov 29 03:39:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:17Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:8d:5b 10.100.0.3
Nov 29 03:39:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539551 nova_compute[227360]: 2025-11-29 08:39:18.914 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:19.888 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:19.889 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:19.889 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:21 np0005539551 nova_compute[227360]: 2025-11-29 08:39:21.107 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:21.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:23 np0005539551 nova_compute[227360]: 2025-11-29 08:39:23.715 227364 INFO nova.compute.manager [None req-2a5f341a-58b8-4c38-b15b-a0f2cd616972 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Get console output#033[00m
Nov 29 03:39:23 np0005539551 nova_compute[227360]: 2025-11-29 08:39:23.721 260937 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:39:23 np0005539551 nova_compute[227360]: 2025-11-29 08:39:23.916 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:23.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:25.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539551 nova_compute[227360]: 2025-11-29 08:39:26.109 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539551 nova_compute[227360]: 2025-11-29 08:39:26.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:26 np0005539551 podman[293931]: 2025-11-29 08:39:26.618238736 +0000 UTC m=+0.070784197 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:26 np0005539551 podman[293929]: 2025-11-29 08:39:26.618737009 +0000 UTC m=+0.075757962 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:26 np0005539551 podman[293930]: 2025-11-29 08:39:26.621272878 +0000 UTC m=+0.076219855 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:39:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:26.851 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:26 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:26.852 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:39:26 np0005539551 nova_compute[227360]: 2025-11-29 08:39:26.852 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.506 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.506 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.522 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.605 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.605 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.613 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.613 227364 INFO nova.compute.claims [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.756 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.852 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.852 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539551 nova_compute[227360]: 2025-11-29 08:39:27.853 227364 DEBUG nova.objects.instance [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'flavor' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:27.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3373569601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.214 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.219 227364 DEBUG nova.compute.provider_tree [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.235 227364 DEBUG nova.scheduler.client.report [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.259 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.260 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.300 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.300 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.327 227364 INFO nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.355 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.358 227364 DEBUG nova.objects.instance [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_requests' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.376 227364 DEBUG nova.network.neutron [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.409 227364 INFO nova.virt.block_device [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Booting with volume c9cf1b4f-d920-4673-b1fc-8c791f56541b at /dev/vda#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.532 227364 DEBUG nova.policy [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'facf4db8501041ab9628ff9f5684c992', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.576 227364 DEBUG nova.policy [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.601 227364 DEBUG os_brick.utils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.602 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.613 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.613 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[536a4274-1145-4cd8-a1f2-b253686e74de]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.614 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.624 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.624 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc48ceb-36ca-4352-97e4-9faff9eb0c2c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.625 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.636 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.636 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[643f3d89-4e2f-43ad-9875-9d8bdd9a4117]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.637 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[43ffb8cd-4f56-41a4-bc2a-c902bd130538]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.638 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.667 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.670 227364 DEBUG os_brick.initiator.connectors.lightos [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.670 227364 DEBUG os_brick.initiator.connectors.lightos [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.670 227364 DEBUG os_brick.initiator.connectors.lightos [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.671 227364 DEBUG os_brick.utils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.671 227364 DEBUG nova.virt.block_device [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating existing volume attachment record: a7b29211-f924-4277-ac59-60f674168cc5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:39:28 np0005539551 nova_compute[227360]: 2025-11-29 08:39:28.971 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1166825849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.293 227364 DEBUG nova.network.neutron [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Successfully created port: 98decf71-da91-41cd-9382-afba328b1338 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.333 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Successfully created port: 6ae38b68-e18b-4ba6-be6e-d411d58b407b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:39:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.626 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.628 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.628 227364 INFO nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Creating image(s)#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.629 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.629 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Ensure instance console log exists: /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.629 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.629 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:29 np0005539551 nova_compute[227360]: 2025-11-29 08:39:29.630 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:29.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:31 np0005539551 nova_compute[227360]: 2025-11-29 08:39:31.112 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:31.853 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:31.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:32 np0005539551 nova_compute[227360]: 2025-11-29 08:39:32.117 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Successfully updated port: 6ae38b68-e18b-4ba6-be6e-d411d58b407b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:32 np0005539551 nova_compute[227360]: 2025-11-29 08:39:32.136 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:32 np0005539551 nova_compute[227360]: 2025-11-29 08:39:32.136 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:32 np0005539551 nova_compute[227360]: 2025-11-29 08:39:32.136 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.133 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.138 227364 DEBUG nova.network.neutron [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Successfully updated port: 98decf71-da91-41cd-9382-afba328b1338 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.154 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.154 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.154 227364 DEBUG nova.network.neutron [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.311 227364 DEBUG nova.compute.manager [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-changed-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.311 227364 DEBUG nova.compute.manager [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing instance network info cache due to event network-changed-98decf71-da91-41cd-9382-afba328b1338. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.311 227364 DEBUG oslo_concurrency.lockutils [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:33.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:33 np0005539551 nova_compute[227360]: 2025-11-29 08:39:33.973 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539551 nova_compute[227360]: 2025-11-29 08:39:34.200 227364 DEBUG nova.compute.manager [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:34 np0005539551 nova_compute[227360]: 2025-11-29 08:39:34.200 227364 DEBUG nova.compute.manager [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:34 np0005539551 nova_compute[227360]: 2025-11-29 08:39:34.201 227364 DEBUG oslo_concurrency.lockutils [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.166 227364 DEBUG nova.network.neutron [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.190 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.190 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Instance network_info: |[{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.191 227364 DEBUG oslo_concurrency.lockutils [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.191 227364 DEBUG nova.network.neutron [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.194 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Start _get_guest_xml network_info=[{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c9cf1b4f-d920-4673-b1fc-8c791f56541b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c9cf1b4f-d920-4673-b1fc-8c791f56541b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fc75971c-cf7e-4383-81cf-81c801f67489', 'attached_at': '', 'detached_at': '', 'volume_id': 'c9cf1b4f-d920-4673-b1fc-8c791f56541b', 'serial': 'c9cf1b4f-d920-4673-b1fc-8c791f56541b'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'a7b29211-f924-4277-ac59-60f674168cc5', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.201 227364 WARNING nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.206 227364 DEBUG nova.virt.libvirt.host [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.206 227364 DEBUG nova.virt.libvirt.host [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.211 227364 DEBUG nova.virt.libvirt.host [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.211 227364 DEBUG nova.virt.libvirt.host [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.212 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.213 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.213 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.213 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.213 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.214 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.214 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.214 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.214 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.215 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.215 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.215 227364 DEBUG nova.virt.hardware [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.239 227364 DEBUG nova.storage.rbd_utils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image fc75971c-cf7e-4383-81cf-81c801f67489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.242 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/268740031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.656 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.684 227364 DEBUG nova.virt.libvirt.vif [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-700405556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-700405556',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-k73u0foc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:28Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=fc75971c-cf7e-4383-81cf-81c801f67489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.684 227364 DEBUG nova.network.os_vif_util [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.685 227364 DEBUG nova.network.os_vif_util [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.686 227364 DEBUG nova.objects.instance [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.702 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <uuid>fc75971c-cf7e-4383-81cf-81c801f67489</uuid>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <name>instance-000000bc</name>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-700405556</nova:name>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:39:35</nova:creationTime>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:user uuid="facf4db8501041ab9628ff9f5684c992">tempest-TestInstancesWithCinderVolumes-911868990-project-member</nova:user>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:project uuid="62ca01275fe34ea0af31d00b34d6d9a5">tempest-TestInstancesWithCinderVolumes-911868990</nova:project>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <nova:port uuid="6ae38b68-e18b-4ba6-be6e-d411d58b407b">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="serial">fc75971c-cf7e-4383-81cf-81c801f67489</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="uuid">fc75971c-cf7e-4383-81cf-81c801f67489</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/fc75971c-cf7e-4383-81cf-81c801f67489_disk.config">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-c9cf1b4f-d920-4673-b1fc-8c791f56541b">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <serial>c9cf1b4f-d920-4673-b1fc-8c791f56541b</serial>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:6f:01:5b"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <target dev="tap6ae38b68-e1"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/console.log" append="off"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:39:35 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:39:35 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:39:35 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:39:35 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.703 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Preparing to wait for external event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.704 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.705 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.705 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.706 227364 DEBUG nova.virt.libvirt.vif [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-700405556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-700405556',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-k73u0foc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:28Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=fc75971c-cf7e-4383-81cf-81c801f67489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.707 227364 DEBUG nova.network.os_vif_util [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.708 227364 DEBUG nova.network.os_vif_util [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.709 227364 DEBUG os_vif [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.711 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.712 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.716 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ae38b68-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.717 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ae38b68-e1, col_values=(('external_ids', {'iface-id': '6ae38b68-e18b-4ba6-be6e-d411d58b407b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:01:5b', 'vm-uuid': 'fc75971c-cf7e-4383-81cf-81c801f67489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.719 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539551 NetworkManager[48922]: <info>  [1764405575.7202] manager: (tap6ae38b68-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.722 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.729 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539551 nova_compute[227360]: 2025-11-29 08:39:35.731 227364 INFO os_vif [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1')#033[00m
Nov 29 03:39:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.028 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.028 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.029 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:6f:01:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.029 227364 INFO nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Using config drive#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.054 227364 DEBUG nova.storage.rbd_utils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image fc75971c-cf7e-4383-81cf-81c801f67489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.261 227364 DEBUG nova.network.neutron [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.301 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.302 227364 DEBUG oslo_concurrency.lockutils [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.302 227364 DEBUG nova.network.neutron [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing network info cache for port 98decf71-da91-41cd-9382-afba328b1338 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.305 227364 DEBUG nova.virt.libvirt.vif [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:02Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.305 227364 DEBUG nova.network.os_vif_util [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.306 227364 DEBUG nova.network.os_vif_util [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.306 227364 DEBUG os_vif [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.309 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.309 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.309 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.311 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.311 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98decf71-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.312 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98decf71-da, col_values=(('external_ids', {'iface-id': '98decf71-da91-41cd-9382-afba328b1338', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:a4:7d', 'vm-uuid': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.313 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.3144] manager: (tap98decf71-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.319 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.320 227364 INFO os_vif [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da')#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.320 227364 DEBUG nova.virt.libvirt.vif [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:02Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.321 227364 DEBUG nova.network.os_vif_util [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.321 227364 DEBUG nova.network.os_vif_util [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.323 227364 DEBUG nova.virt.libvirt.guest [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:9e:a4:7d"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <target dev="tap98decf71-da"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:39:36 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.3358] manager: (tap98decf71-da): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 03:39:36 np0005539551 kernel: tap98decf71-da: entered promiscuous mode
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.338 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:36Z|00816|binding|INFO|Claiming lport 98decf71-da91-41cd-9382-afba328b1338 for this chassis.
Nov 29 03:39:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:36Z|00817|binding|INFO|98decf71-da91-41cd-9382-afba328b1338: Claiming fa:16:3e:9e:a4:7d 10.100.0.21
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.348 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:a4:7d 10.100.0.21'], port_security=['fa:16:3e:9e:a4:7d 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7843eb0e-3b68-48d7-b889-5bece517c173, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=98decf71-da91-41cd-9382-afba328b1338) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.349 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 98decf71-da91-41cd-9382-afba328b1338 in datapath f3d6a66c-1acd-4ae3-9639-b6444469c1fc bound to our chassis#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.352 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3d6a66c-1acd-4ae3-9639-b6444469c1fc#033[00m
Nov 29 03:39:36 np0005539551 systemd-udevd[294223]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.363 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d4163b90-d460-49df-bc69-cbb76a9c86f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.364 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3d6a66c-11 in ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.366 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3d6a66c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.366 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a6a931-a327-4792-9941-b16ea3daa955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.367 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba7e3e9-8973-4ad9-8440-6a231dac3260]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.3776] device (tap98decf71-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.3784] device (tap98decf71-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.380 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb6b438-9884-4c5b-824b-dde37a13ca54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:36Z|00818|binding|INFO|Setting lport 98decf71-da91-41cd-9382-afba328b1338 ovn-installed in OVS
Nov 29 03:39:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:36Z|00819|binding|INFO|Setting lport 98decf71-da91-41cd-9382-afba328b1338 up in Southbound
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.396 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.404 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2d705006-e3bd-4c72-8101-5107dd4c9f40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.431 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7c1234-2167-4b8e-bf8b-fb19cc36c62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.436 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[04421522-de9a-4b17-ba63-64d106af08c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.4374] manager: (tapf3d6a66c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.465 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[869f6408-b0df-4429-b874-9cb7b1ae5f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.467 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c1fa826e-112e-4e5e-8302-78246bb89c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.4879] device (tapf3d6a66c-10): carrier: link connected
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.493 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2166bc72-e781-4037-a060-7205da72a47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.509 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[764d4a2e-527a-4ef6-9e6c-ca8b99d04d3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3d6a66c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:1b:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862336, 'reachable_time': 32332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294256, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.525 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[00e546db-3f59-4424-b73a-f47bc051d4b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:1bf5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 862336, 'tstamp': 862336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294257, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.542 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d26038b-e4f4-420d-a705-040751140797]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3d6a66c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:1b:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862336, 'reachable_time': 32332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294258, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.568 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[db56b7e8-d970-40e9-9835-31a1bf10ded5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.618 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa3bb9e-3bbd-4d95-81c8-80fb44cc19ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.620 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3d6a66c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.621 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.621 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3d6a66c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 NetworkManager[48922]: <info>  [1764405576.6240] manager: (tapf3d6a66c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.623 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 kernel: tapf3d6a66c-10: entered promiscuous mode
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.628 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.630 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3d6a66c-10, col_values=(('external_ids', {'iface-id': '1e240f8f-8745-4fcb-b4a3-c32894f2f8b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:36Z|00820|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.631 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.645 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.649 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.650 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.651 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[247a2d56-3b50-40ef-a43b-a6549b197b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.652 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-f3d6a66c-1acd-4ae3-9639-b6444469c1fc
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID f3d6a66c-1acd-4ae3-9639-b6444469c1fc
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:39:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:36.653 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'env', 'PROCESS_TAG=haproxy-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.758 227364 DEBUG nova.virt.libvirt.driver [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.759 227364 DEBUG nova.virt.libvirt.driver [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.759 227364 DEBUG nova.virt.libvirt.driver [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:54:8d:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.759 227364 DEBUG nova.virt.libvirt.driver [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:9e:a4:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.785 227364 DEBUG nova.virt.libvirt.guest [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1601524375</nova:name>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:39:36</nova:creationTime>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:port uuid="89414cbe-0908-4dc8-af0e-648ea658b1fa">
Nov 29 03:39:36 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    <nova:port uuid="98decf71-da91-41cd-9382-afba328b1338">
Nov 29 03:39:36 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:39:36 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:39:36 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:39:36 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.832 227364 DEBUG oslo_concurrency.lockutils [None req-32d5afc6-7ab5-429d-8358-f61e055788d7 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.868 227364 INFO nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Creating config drive at /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config#033[00m
Nov 29 03:39:36 np0005539551 nova_compute[227360]: 2025-11-29 08:39:36.876 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi9m2z7rc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.017 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi9m2z7rc" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:37 np0005539551 podman[294296]: 2025-11-29 08:39:37.019461628 +0000 UTC m=+0.052073141 container create 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.048 227364 DEBUG nova.storage.rbd_utils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image fc75971c-cf7e-4383-81cf-81c801f67489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.052 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config fc75971c-cf7e-4383-81cf-81c801f67489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:37 np0005539551 systemd[1]: Started libpod-conmon-0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e.scope.
Nov 29 03:39:37 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:39:37 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68d9403b644802f2a658772dd89ac3aa6e54690b8767007984f1573a8ebbe89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:37 np0005539551 podman[294296]: 2025-11-29 08:39:36.98962208 +0000 UTC m=+0.022233623 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:37 np0005539551 podman[294296]: 2025-11-29 08:39:37.096701389 +0000 UTC m=+0.129312922 container init 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:39:37 np0005539551 podman[294296]: 2025-11-29 08:39:37.103977876 +0000 UTC m=+0.136589389 container start 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:37 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [NOTICE]   (294334) : New worker (294348) forked
Nov 29 03:39:37 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [NOTICE]   (294334) : Loading success.
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.220 227364 DEBUG oslo_concurrency.processutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config fc75971c-cf7e-4383-81cf-81c801f67489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.221 227364 INFO nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Deleting local config drive /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489/disk.config because it was imported into RBD.#033[00m
Nov 29 03:39:37 np0005539551 kernel: tap6ae38b68-e1: entered promiscuous mode
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.2921] manager: (tap6ae38b68-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 03:39:37 np0005539551 systemd-udevd[294248]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.294 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00821|binding|INFO|Claiming lport 6ae38b68-e18b-4ba6-be6e-d411d58b407b for this chassis.
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00822|binding|INFO|6ae38b68-e18b-4ba6-be6e-d411d58b407b: Claiming fa:16:3e:6f:01:5b 10.100.0.5
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.3063] device (tap6ae38b68-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.3081] device (tap6ae38b68-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.312 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:01:5b 10.100.0.5'], port_security=['fa:16:3e:6f:01:5b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fc75971c-cf7e-4383-81cf-81c801f67489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6ae38b68-e18b-4ba6-be6e-d411d58b407b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.313 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6ae38b68-e18b-4ba6-be6e-d411d58b407b in datapath c114bc23-cd62-4198-a95d-5595953a88bd bound to our chassis#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.315 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c114bc23-cd62-4198-a95d-5595953a88bd#033[00m
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00823|binding|INFO|Setting lport 6ae38b68-e18b-4ba6-be6e-d411d58b407b ovn-installed in OVS
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00824|binding|INFO|Setting lport 6ae38b68-e18b-4ba6-be6e-d411d58b407b up in Southbound
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.318 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.323 227364 DEBUG nova.network.neutron [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.324 227364 DEBUG nova.network.neutron [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.327 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[52b0d613-8a5f-4d56-a496-87411db3371c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.328 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc114bc23-c1 in ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:37 np0005539551 systemd-machined[190756]: New machine qemu-86-instance-000000bc.
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.330 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc114bc23-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.330 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe3d4e3-6188-41a7-9503-5a0b1440c19c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.331 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b0453981-95e7-4b21-a2b5-0af1cf20df7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 systemd[1]: Started Virtual Machine qemu-86-instance-000000bc.
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.343 227364 DEBUG oslo_concurrency.lockutils [req-22755896-6968-4406-a860-e18f77e9b436 req-6c8e1d73-58bc-4428-902d-07898b6a07e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.344 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[59dacb17-cf11-48ee-8bc0-c1fb176b3943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.369 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2264c7ff-792f-42ba-acc2-28bc7b0c88c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.400 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[872c0a89-3121-4b1f-9e6e-3ab82707b22a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.405 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[029bc0de-a056-4f70-8ae5-3999ce1bc24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.4066] manager: (tapc114bc23-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.435 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6414af82-0750-4f9d-81dd-c29152dc987a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.438 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[bc039f8f-23d5-4803-9bfa-4fcba3308e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.4599] device (tapc114bc23-c0): carrier: link connected
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.464 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[874f2a13-d711-4fbe-903e-dae0328c6213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.484 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[24c40029-9695-422b-880a-c8030b7edaba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862433, 'reachable_time': 43457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294392, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.500 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3974ef73-7cad-4e11-beb9-1e6c743dab38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:784d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 862433, 'tstamp': 862433}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294393, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:a4:7d 10.100.0.21
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:a4:7d 10.100.0.21
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.515 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f806574a-85c4-469e-9cd5-4a0e33d8f6bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862433, 'reachable_time': 43457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294394, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.543 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd28329-b7aa-480e-a117-d51a20da8318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.595 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[28dc7b99-2b91-4d21-a575-ba08a8f8341a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.596 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.596 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.597 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc114bc23-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.598 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 NetworkManager[48922]: <info>  [1764405577.5999] manager: (tapc114bc23-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 03:39:37 np0005539551 kernel: tapc114bc23-c0: entered promiscuous mode
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.600 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.602 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc114bc23-c0, col_values=(('external_ids', {'iface-id': '1642a0e3-a8d4-4ee4-8971-26f27541a04e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:37Z|00825|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.617 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.619 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.620 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cf5315-4dab-4cf1-aaf1-28cd60b746dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.620 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:39:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:39:37.621 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'env', 'PROCESS_TAG=haproxy-c114bc23-cd62-4198-a95d-5595953a88bd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c114bc23-cd62-4198-a95d-5595953a88bd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.774 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405577.7737603, fc75971c-cf7e-4383-81cf-81c801f67489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.774 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] VM Started (Lifecycle Event)#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.805 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.809 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405577.776168, fc75971c-cf7e-4383-81cf-81c801f67489 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.809 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.837 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.842 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:37 np0005539551 nova_compute[227360]: 2025-11-29 08:39:37.870 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:37.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:37 np0005539551 podman[294468]: 2025-11-29 08:39:37.971800664 +0000 UTC m=+0.042052080 container create cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:39:38 np0005539551 systemd[1]: Started libpod-conmon-cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e.scope.
Nov 29 03:39:38 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:39:38 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb2080b7d3a7fc4153ce50c63bb1ab95a130798c2afeb7298e85353fe28fffde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:38 np0005539551 podman[294468]: 2025-11-29 08:39:37.949951472 +0000 UTC m=+0.020202918 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:38 np0005539551 podman[294468]: 2025-11-29 08:39:38.050510725 +0000 UTC m=+0.120762181 container init cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:38 np0005539551 podman[294468]: 2025-11-29 08:39:38.055797828 +0000 UTC m=+0.126049254 container start cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:38 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [NOTICE]   (294487) : New worker (294489) forked
Nov 29 03:39:38 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [NOTICE]   (294487) : Loading success.
Nov 29 03:39:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.349 227364 DEBUG nova.compute.manager [req-785ac881-9994-4b1b-932f-850875e191d3 req-e292c642-3ee3-4c27-8ddf-6d3d7bdc729a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.350 227364 DEBUG oslo_concurrency.lockutils [req-785ac881-9994-4b1b-932f-850875e191d3 req-e292c642-3ee3-4c27-8ddf-6d3d7bdc729a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.350 227364 DEBUG oslo_concurrency.lockutils [req-785ac881-9994-4b1b-932f-850875e191d3 req-e292c642-3ee3-4c27-8ddf-6d3d7bdc729a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.350 227364 DEBUG oslo_concurrency.lockutils [req-785ac881-9994-4b1b-932f-850875e191d3 req-e292c642-3ee3-4c27-8ddf-6d3d7bdc729a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.350 227364 DEBUG nova.compute.manager [req-785ac881-9994-4b1b-932f-850875e191d3 req-e292c642-3ee3-4c27-8ddf-6d3d7bdc729a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Processing event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.351 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.354 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405578.3547397, fc75971c-cf7e-4383-81cf-81c801f67489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.355 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.357 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.359 227364 INFO nova.virt.libvirt.driver [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Instance spawned successfully.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.359 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.384 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.384 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.385 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.385 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.385 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.385 227364 DEBUG nova.virt.libvirt.driver [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.390 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.393 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.448 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.488 227364 DEBUG nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.488 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.488 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.489 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.489 227364 DEBUG nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.489 227364 WARNING nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.489 227364 DEBUG nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.489 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.490 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.490 227364 DEBUG oslo_concurrency.lockutils [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.490 227364 DEBUG nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.490 227364 WARNING nova.compute.manager [req-8dc3dd63-6e14-4c48-a183-83f53c479989 req-d8f79a87-a29d-4c16-8b48-51ec0c26887c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.509 227364 INFO nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Took 8.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.509 227364 DEBUG nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.587 227364 INFO nova.compute.manager [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Took 11.02 seconds to build instance.#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.620 227364 DEBUG oslo_concurrency.lockutils [None req-dc2e05f8-73f2-414f-af49-dd6d4237efd9 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:38 np0005539551 nova_compute[227360]: 2025-11-29 08:39:38.975 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:39 np0005539551 nova_compute[227360]: 2025-11-29 08:39:39.053 227364 DEBUG nova.network.neutron [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated VIF entry in instance network info cache for port 98decf71-da91-41cd-9382-afba328b1338. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:39 np0005539551 nova_compute[227360]: 2025-11-29 08:39:39.054 227364 DEBUG nova.network.neutron [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:39 np0005539551 nova_compute[227360]: 2025-11-29 08:39:39.072 227364 DEBUG oslo_concurrency.lockutils [req-95222062-467a-4ce3-b5d4-a9a8e2f3b72d req-058ddf11-9afb-4a07-876d-9aea9bc68c04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:39.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.475 227364 DEBUG nova.compute.manager [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.476 227364 DEBUG oslo_concurrency.lockutils [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.476 227364 DEBUG oslo_concurrency.lockutils [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.476 227364 DEBUG oslo_concurrency.lockutils [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.476 227364 DEBUG nova.compute.manager [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] No waiting events found dispatching network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:40 np0005539551 nova_compute[227360]: 2025-11-29 08:39:40.476 227364 WARNING nova.compute.manager [req-37ea8863-b5b1-4ac8-a96a-27f0b627e173 req-8e6f10f2-7ac7-4a99-ac3e-23cdac092f24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received unexpected event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:39:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:41 np0005539551 nova_compute[227360]: 2025-11-29 08:39:41.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:42 np0005539551 nova_compute[227360]: 2025-11-29 08:39:42.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:43.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:43 np0005539551 nova_compute[227360]: 2025-11-29 08:39:43.977 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 03:39:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:45.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539551 nova_compute[227360]: 2025-11-29 08:39:46.355 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.461 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.461 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.462 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.462 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.462 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/932236958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:47 np0005539551 nova_compute[227360]: 2025-11-29 08:39:47.917 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:47.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.091 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.091 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.099 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.100 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.285 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.286 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3867MB free_disk=20.850975036621094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.287 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.287 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.368 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.369 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance fc75971c-cf7e-4383-81cf-81c801f67489 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.369 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.369 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.467 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1546841391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.900 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.905 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.933 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.980 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.990 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:39:48 np0005539551 nova_compute[227360]: 2025-11-29 08:39:48.990 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 03:39:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:49 np0005539551 nova_compute[227360]: 2025-11-29 08:39:49.992 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539551 nova_compute[227360]: 2025-11-29 08:39:49.992 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:39:49 np0005539551 nova_compute[227360]: 2025-11-29 08:39:49.992 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:39:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:51 np0005539551 nova_compute[227360]: 2025-11-29 08:39:51.321 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:51 np0005539551 nova_compute[227360]: 2025-11-29 08:39:51.322 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:51 np0005539551 nova_compute[227360]: 2025-11-29 08:39:51.322 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:39:51 np0005539551 nova_compute[227360]: 2025-11-29 08:39:51.322 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:51 np0005539551 nova_compute[227360]: 2025-11-29 08:39:51.358 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 03:39:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:53Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:01:5b 10.100.0.5
Nov 29 03:39:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:39:53Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:01:5b 10.100.0.5
Nov 29 03:39:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:53 np0005539551 nova_compute[227360]: 2025-11-29 08:39:53.984 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539551 nova_compute[227360]: 2025-11-29 08:39:54.780 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.233 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.233 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.233 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.234 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.234 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:55 np0005539551 nova_compute[227360]: 2025-11-29 08:39:55.234 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:39:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 03:39:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:56 np0005539551 nova_compute[227360]: 2025-11-29 08:39:56.362 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539551 nova_compute[227360]: 2025-11-29 08:39:57.423 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 03:39:57 np0005539551 podman[294597]: 2025-11-29 08:39:57.600133018 +0000 UTC m=+0.048216836 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:39:57 np0005539551 podman[294596]: 2025-11-29 08:39:57.604472416 +0000 UTC m=+0.058271319 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:57 np0005539551 podman[294595]: 2025-11-29 08:39:57.627313265 +0000 UTC m=+0.082651259 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:39:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539551 nova_compute[227360]: 2025-11-29 08:39:58.986 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:39:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:00 np0005539551 nova_compute[227360]: 2025-11-29 08:40:00.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:00 np0005539551 nova_compute[227360]: 2025-11-29 08:40:00.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:40:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:40:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:01 np0005539551 nova_compute[227360]: 2025-11-29 08:40:01.364 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:01Z|00826|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:01Z|00827|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:01Z|00828|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539551 nova_compute[227360]: 2025-11-29 08:40:01.580 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:01.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:03 np0005539551 nova_compute[227360]: 2025-11-29 08:40:03.990 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:04 np0005539551 nova_compute[227360]: 2025-11-29 08:40:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 03:40:05 np0005539551 nova_compute[227360]: 2025-11-29 08:40:05.421 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:05.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539551 nova_compute[227360]: 2025-11-29 08:40:06.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:06Z|00829|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:06Z|00830|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:06Z|00831|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539551 nova_compute[227360]: 2025-11-29 08:40:06.639 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:08 np0005539551 nova_compute[227360]: 2025-11-29 08:40:08.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:11 np0005539551 nova_compute[227360]: 2025-11-29 08:40:11.415 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:12 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 03:40:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 03:40:13 np0005539551 nova_compute[227360]: 2025-11-29 08:40:13.996 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:15 np0005539551 nova_compute[227360]: 2025-11-29 08:40:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:15 np0005539551 nova_compute[227360]: 2025-11-29 08:40:15.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:40:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:16 np0005539551 nova_compute[227360]: 2025-11-29 08:40:16.376 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:40:16 np0005539551 nova_compute[227360]: 2025-11-29 08:40:16.417 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:18 np0005539551 nova_compute[227360]: 2025-11-29 08:40:18.998 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:19.890 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:19.891 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:19.891 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:21 np0005539551 nova_compute[227360]: 2025-11-29 08:40:21.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:23.999 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:24.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.340 227364 DEBUG nova.compute.manager [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-changed-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.341 227364 DEBUG nova.compute.manager [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing instance network info cache due to event network-changed-98decf71-da91-41cd-9382-afba328b1338. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.342 227364 DEBUG oslo_concurrency.lockutils [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.342 227364 DEBUG oslo_concurrency.lockutils [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.342 227364 DEBUG nova.network.neutron [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing network info cache for port 98decf71-da91-41cd-9382-afba328b1338 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:24 np0005539551 nova_compute[227360]: 2025-11-29 08:40:24.643 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 03:40:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:26.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:26 np0005539551 nova_compute[227360]: 2025-11-29 08:40:26.422 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:26 np0005539551 nova_compute[227360]: 2025-11-29 08:40:26.945 227364 DEBUG nova.network.neutron [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated VIF entry in instance network info cache for port 98decf71-da91-41cd-9382-afba328b1338. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:26 np0005539551 nova_compute[227360]: 2025-11-29 08:40:26.946 227364 DEBUG nova.network.neutron [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:26 np0005539551 nova_compute[227360]: 2025-11-29 08:40:26.973 227364 DEBUG oslo_concurrency.lockutils [req-5677d1cc-9a85-41d1-93cc-d2f7f370a19d req-0f69ccb5-3d4f-476c-955c-ab4b283a50b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:28.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:28 np0005539551 nova_compute[227360]: 2025-11-29 08:40:28.576 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:28.577 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:28 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:28.578 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:40:28 np0005539551 podman[294661]: 2025-11-29 08:40:28.643097716 +0000 UTC m=+0.078747273 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:40:28 np0005539551 podman[294662]: 2025-11-29 08:40:28.651107353 +0000 UTC m=+0.084872190 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:40:28 np0005539551 podman[294660]: 2025-11-29 08:40:28.657214298 +0000 UTC m=+0.107035999 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:40:29 np0005539551 nova_compute[227360]: 2025-11-29 08:40:29.002 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:30.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:30 np0005539551 nova_compute[227360]: 2025-11-29 08:40:30.316 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:31 np0005539551 nova_compute[227360]: 2025-11-29 08:40:31.424 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:32.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:34.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:34 np0005539551 nova_compute[227360]: 2025-11-29 08:40:34.058 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:34.580 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:36.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:36 np0005539551 nova_compute[227360]: 2025-11-29 08:40:36.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:39 np0005539551 nova_compute[227360]: 2025-11-29 08:40:39.061 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:40.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:41 np0005539551 nova_compute[227360]: 2025-11-29 08:40:41.433 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:44 np0005539551 nova_compute[227360]: 2025-11-29 08:40:44.064 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:40:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:44 np0005539551 nova_compute[227360]: 2025-11-29 08:40:44.375 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:46 np0005539551 nova_compute[227360]: 2025-11-29 08:40:46.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.503 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.503 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.504 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.504 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.504 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1566603201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:47 np0005539551 nova_compute[227360]: 2025-11-29 08:40:47.954 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.042 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.042 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.046 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.046 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.227 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.228 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3857MB free_disk=20.829952239990234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.229 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.229 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:48.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.350 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.350 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance fc75971c-cf7e-4383-81cf-81c801f67489 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.351 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.351 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.465 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2658383915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.922 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.930 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.952 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.986 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:40:48 np0005539551 nova_compute[227360]: 2025-11-29 08:40:48.987 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.068 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.094 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.095 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.118 227364 DEBUG nova.objects.instance [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.166 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.429 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.430 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.431 227364 INFO nova.compute.manager [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attaching volume ddbdab48-23f2-48fc-bdd4-33793fe5d282 to /dev/vdb#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.629 227364 DEBUG os_brick.utils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.631 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.653 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.654 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[593cb1be-1ac3-411f-9aa2-47ecf2f3243e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.655 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.664 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.665 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[09ffe1f1-e5f9-45a4-ac42-e30ac97bf149]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.666 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.676 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.676 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[78b48ff4-5528-4d0d-8803-8654e64cd54f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.678 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb9c54e-5fda-4578-9fad-59c0b8a42caa]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.679 227364 DEBUG oslo_concurrency.processutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.716 227364 DEBUG oslo_concurrency.processutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.719 227364 DEBUG os_brick.initiator.connectors.lightos [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.720 227364 DEBUG os_brick.initiator.connectors.lightos [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.720 227364 DEBUG os_brick.initiator.connectors.lightos [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.721 227364 DEBUG os_brick.utils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:40:49 np0005539551 nova_compute[227360]: 2025-11-29 08:40:49.721 227364 DEBUG nova.virt.block_device [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating existing volume attachment record: 6d6d1383-e6ad-4623-a2e8-37a1d85b5465 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:40:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:50.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.538 227364 DEBUG nova.objects.instance [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.566 227364 DEBUG nova.virt.libvirt.driver [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attempting to attach volume ddbdab48-23f2-48fc-bdd4-33793fe5d282 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.568 227364 DEBUG nova.virt.libvirt.guest [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-ddbdab48-23f2-48fc-bdd4-33793fe5d282">
Nov 29 03:40:50 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:40:50 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:40:50 np0005539551 nova_compute[227360]:  <serial>ddbdab48-23f2-48fc-bdd4-33793fe5d282</serial>
Nov 29 03:40:50 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:40:50 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.685 227364 DEBUG nova.virt.libvirt.driver [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.685 227364 DEBUG nova.virt.libvirt.driver [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.685 227364 DEBUG nova.virt.libvirt.driver [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.685 227364 DEBUG nova.virt.libvirt.driver [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:6f:01:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.967 227364 DEBUG oslo_concurrency.lockutils [None req-51ec5aba-11a4-429a-9ae6-718330f6f7e0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.983 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.983 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:50 np0005539551 nova_compute[227360]: 2025-11-29 08:40:50.983 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:40:51 np0005539551 nova_compute[227360]: 2025-11-29 08:40:51.168 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:51 np0005539551 nova_compute[227360]: 2025-11-29 08:40:51.168 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:51 np0005539551 nova_compute[227360]: 2025-11-29 08:40:51.168 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:40:51 np0005539551 nova_compute[227360]: 2025-11-29 08:40:51.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.286 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.286 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.302 227364 DEBUG nova.objects.instance [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.340 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.465 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.490 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.490 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.491 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.783 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.783 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:52 np0005539551 nova_compute[227360]: 2025-11-29 08:40:52.783 227364 INFO nova.compute.manager [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attaching volume f8d3ed9a-0699-4f69-8315-0c8ebf96b12c to /dev/vdc#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.010 227364 DEBUG os_brick.utils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.012 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.026 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.026 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[0227f0f9-9442-4a17-959c-6a640dc181e4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.027 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.036 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.037 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[7d78c0ec-364a-4098-a2cd-a04f42dffbb7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.038 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.048 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.048 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[9940d7e2-08d4-462f-9cb6-f531e5885b63]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.050 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[097120b0-a1aa-41f3-8d0e-ea74331ea149]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.050 227364 DEBUG oslo_concurrency.processutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.083 227364 DEBUG oslo_concurrency.processutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.086 227364 DEBUG os_brick.initiator.connectors.lightos [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.086 227364 DEBUG os_brick.initiator.connectors.lightos [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.086 227364 DEBUG os_brick.initiator.connectors.lightos [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.087 227364 DEBUG os_brick.utils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.087 227364 DEBUG nova.virt.block_device [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating existing volume attachment record: beed8568-0144-43ae-b882-fce6ca87d63e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.840 227364 DEBUG nova.objects.instance [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.861 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attempting to attach volume f8d3ed9a-0699-4f69-8315-0c8ebf96b12c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.864 227364 DEBUG nova.virt.libvirt.guest [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-f8d3ed9a-0699-4f69-8315-0c8ebf96b12c">
Nov 29 03:40:53 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  <auth username="openstack">
Nov 29 03:40:53 np0005539551 nova_compute[227360]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  </auth>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:53 np0005539551 nova_compute[227360]:  <serial>f8d3ed9a-0699-4f69-8315-0c8ebf96b12c</serial>
Nov 29 03:40:53 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:40:53 np0005539551 nova_compute[227360]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.974 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.975 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.975 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.975 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:53 np0005539551 nova_compute[227360]: 2025-11-29 08:40:53.975 227364 DEBUG nova.virt.libvirt.driver [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:6f:01:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:54 np0005539551 nova_compute[227360]: 2025-11-29 08:40:54.069 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:54 np0005539551 nova_compute[227360]: 2025-11-29 08:40:54.214 227364 DEBUG oslo_concurrency.lockutils [None req-58f61497-705c-4320-8717-d39a37ffe40d facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:55 np0005539551 nova_compute[227360]: 2025-11-29 08:40:55.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:56 np0005539551 nova_compute[227360]: 2025-11-29 08:40:56.442 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.353 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-98decf71-da91-41cd-9382-afba328b1338" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.354 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-98decf71-da91-41cd-9382-afba328b1338" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.372 227364 DEBUG nova.objects.instance [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'flavor' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.392 227364 DEBUG nova.virt.libvirt.vif [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:02Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.393 227364 DEBUG nova.network.os_vif_util [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.394 227364 DEBUG nova.network.os_vif_util [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.396 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.398 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.400 227364 DEBUG nova.virt.libvirt.driver [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Attempting to detach device tap98decf71-da from instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.400 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:9e:a4:7d"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <target dev="tap98decf71-da"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.405 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.409 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface>not found in domain: <domain type='kvm' id='85'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <name>instance-000000b8</name>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <uuid>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</uuid>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1601524375</nova:name>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:39:36</nova:creationTime>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:port uuid="89414cbe-0908-4dc8-af0e-648ea658b1fa">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:port uuid="98decf71-da91-41cd-9382-afba328b1338">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='serial'>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='uuid'>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk' index='2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config' index='1'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:54:8d:5b'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='tap89414cbe-09'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:9e:a4:7d'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='tap98decf71-da'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='net1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log' append='off'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log' append='off'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c707,c881</label>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c881</imagelabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.410 227364 INFO nova.virt.libvirt.driver [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully detached device tap98decf71-da from instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd from the persistent domain config.
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.410 227364 DEBUG nova.virt.libvirt.driver [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] (1/8): Attempting to detach device tap98decf71-da with device alias net1 from instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.411 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <mac address="fa:16:3e:9e:a4:7d"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <model type="virtio"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <mtu size="1442"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <target dev="tap98decf71-da"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </interface>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:40:57 np0005539551 kernel: tap98decf71-da (unregistering): left promiscuous mode
Nov 29 03:40:57 np0005539551 NetworkManager[48922]: <info>  [1764405657.4614] device (tap98decf71-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:40:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:57Z|00832|binding|INFO|Releasing lport 98decf71-da91-41cd-9382-afba328b1338 from this chassis (sb_readonly=0)
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.471 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:57Z|00833|binding|INFO|Setting lport 98decf71-da91-41cd-9382-afba328b1338 down in Southbound
Nov 29 03:40:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:40:57Z|00834|binding|INFO|Removing iface tap98decf71-da ovn-installed in OVS
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.473 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.474 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405657.4739764, c9adc277-3d0a-4bb8-8b47-e9f72114cdfd => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.475 227364 DEBUG nova.virt.libvirt.driver [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Start waiting for the detach event from libvirt for device tap98decf71-da with device alias net1 for instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.475 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.479 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9e:a4:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap98decf71-da"/></interface>not found in domain: <domain type='kvm' id='85'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <name>instance-000000b8</name>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <uuid>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</uuid>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1601524375</nova:name>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:39:36</nova:creationTime>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:port uuid="89414cbe-0908-4dc8-af0e-648ea658b1fa">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:port uuid="98decf71-da91-41cd-9382-afba328b1338">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <memory unit='KiB'>131072</memory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <resource>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <partition>/machine</partition>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </resource>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <sysinfo type='smbios'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='serial'>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='uuid'>c9adc277-3d0a-4bb8-8b47-e9f72114cdfd</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <boot dev='hd'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <smbios mode='sysinfo'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <vmcoreinfo state='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='x2apic'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <feature policy='require' name='vme'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <clock offset='utc'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <timer name='hpet' present='no'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_reboot>restart</on_reboot>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <on_crash>destroy</on_crash>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <disk type='network' device='disk'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk' index='2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='vda' bus='virtio'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='virtio-disk0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <disk type='network' device='cdrom'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <auth username='openstack'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source protocol='rbd' name='vms/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_disk.config' index='1'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='sda' bus='sata'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <readonly/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='sata0-0-0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pcie.0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='1' port='0x10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='2' port='0x11'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='3' port='0x12'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='4' port='0x13'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='5' port='0x14'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='6' port='0x15'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='7' port='0x16'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='8' port='0x17'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.8'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='9' port='0x18'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.9'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='10' port='0x19'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='11' port='0x1a'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.11'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='12' port='0x1b'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.12'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='13' port='0x1c'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.13'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='14' port='0x1d'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.14'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='15' port='0x1e'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.15'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='16' port='0x1f'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.16'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='17' port='0x20'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.17'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='18' port='0x21'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.18'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='19' port='0x22'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.19'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='20' port='0x23'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.20'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='21' port='0x24'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.21'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='22' port='0x25'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.22'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='23' port='0x26'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.23'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='24' port='0x27'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.24'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-root-port'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target chassis='25' port='0x28'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.25'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model name='pcie-pci-bridge'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='pci.26'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='usb'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <controller type='sata' index='0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='ide'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </controller>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <interface type='ethernet'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mac address='fa:16:3e:54:8d:5b'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target dev='tap89414cbe-09'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model type='virtio'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <mtu size='1442'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='net0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <serial type='pty'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log' append='off'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target type='isa-serial' port='0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:        <model name='isa-serial'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      </target>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.480 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:a4:7d 10.100.0.21', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7843eb0e-3b68-48d7-b889-5bece517c173, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=98decf71-da91-41cd-9382-afba328b1338) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <source path='/dev/pts/0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <log file='/var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd/console.log' append='off'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <target type='serial' port='0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='serial0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </console>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='tablet' bus='usb'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='mouse' bus='ps2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input1'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <input type='keyboard' bus='ps2'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='input2'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </input>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <listen type='address' address='::0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </graphics>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <audio id='1' type='none'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='video0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <watchdog model='itco' action='reset'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='watchdog0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </watchdog>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <memballoon model='virtio'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <stats period='10'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='balloon0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <rng model='virtio'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <alias name='rng0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <label>system_u:system_r:svirt_t:s0:c707,c881</label>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c881</imagelabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <label>+107:+107</label>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </seclabel>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.481 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 98decf71-da91-41cd-9382-afba328b1338 in datapath f3d6a66c-1acd-4ae3-9639-b6444469c1fc unbound from our chassis#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.479 227364 INFO nova.virt.libvirt.driver [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully detached device tap98decf71-da from instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd from the live domain config.#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.480 227364 DEBUG nova.virt.libvirt.vif [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:02Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.480 227364 DEBUG nova.network.os_vif_util [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "98decf71-da91-41cd-9382-afba328b1338", "address": "fa:16:3e:9e:a4:7d", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98decf71-da", "ovs_interfaceid": "98decf71-da91-41cd-9382-afba328b1338", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.481 227364 DEBUG nova.network.os_vif_util [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.481 227364 DEBUG os_vif [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.483 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3d6a66c-1acd-4ae3-9639-b6444469c1fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.484 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98decf71-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.484 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[53a03e16-9189-4a34-a165-b561c6cddefb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.484 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc namespace which is not needed anymore#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.487 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.492 227364 INFO os_vif [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:a4:7d,bridge_name='br-int',has_traffic_filtering=True,id=98decf71-da91-41cd-9382-afba328b1338,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98decf71-da')#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.493 227364 DEBUG nova.virt.libvirt.guest [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:name>tempest-TestNetworkBasicOps-server-1601524375</nova:name>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:creationTime>2025-11-29 08:40:57</nova:creationTime>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:flavor name="m1.nano">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:memory>128</nova:memory>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:disk>1</nova:disk>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:swap>0</nova:swap>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:flavor>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:owner>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  <nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    <nova:port uuid="89414cbe-0908-4dc8-af0e-648ea658b1fa">
Nov 29 03:40:57 np0005539551 nova_compute[227360]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:    </nova:port>
Nov 29 03:40:57 np0005539551 nova_compute[227360]:  </nova:ports>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: </nova:instance>
Nov 29 03:40:57 np0005539551 nova_compute[227360]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [NOTICE]   (294334) : haproxy version is 2.8.14-c23fe91
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [NOTICE]   (294334) : path to executable is /usr/sbin/haproxy
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [WARNING]  (294334) : Exiting Master process...
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [WARNING]  (294334) : Exiting Master process...
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [ALERT]    (294334) : Current worker (294348) exited with code 143 (Terminated)
Nov 29 03:40:57 np0005539551 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[294330]: [WARNING]  (294334) : All workers exited. Exiting... (0)
Nov 29 03:40:57 np0005539551 systemd[1]: libpod-0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e.scope: Deactivated successfully.
Nov 29 03:40:57 np0005539551 podman[295026]: 2025-11-29 08:40:57.630416985 +0000 UTC m=+0.044414353 container died 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:40:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:40:57 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d68d9403b644802f2a658772dd89ac3aa6e54690b8767007984f1573a8ebbe89-merged.mount: Deactivated successfully.
Nov 29 03:40:57 np0005539551 podman[295026]: 2025-11-29 08:40:57.685481367 +0000 UTC m=+0.099478725 container cleanup 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:40:57 np0005539551 systemd[1]: libpod-conmon-0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e.scope: Deactivated successfully.
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.790 227364 DEBUG nova.compute.manager [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-unplugged-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.791 227364 DEBUG oslo_concurrency.lockutils [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.791 227364 DEBUG oslo_concurrency.lockutils [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.791 227364 DEBUG oslo_concurrency.lockutils [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.791 227364 DEBUG nova.compute.manager [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-unplugged-98decf71-da91-41cd-9382-afba328b1338 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.791 227364 WARNING nova.compute.manager [req-8b33c644-67a3-4d63-bc95-25ec01e9784b req-5c92884b-fd7f-4059-acf8-5be012058085 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-unplugged-98decf71-da91-41cd-9382-afba328b1338 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:57 np0005539551 podman[295053]: 2025-11-29 08:40:57.872063759 +0000 UTC m=+0.164914947 container remove 0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.880 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a23705d2-d2eb-411a-a60f-5b0606db3e9c]: (4, ('Sat Nov 29 08:40:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc (0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e)\n0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e\nSat Nov 29 08:40:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc (0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e)\n0c6629727a6abc4a732e541369c7a30ac7044bdfa2d8543b19474128bf6fee0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.882 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb05898f-013d-4412-84c8-109d15d27d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.883 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3d6a66c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:57 np0005539551 kernel: tapf3d6a66c-10: left promiscuous mode
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.886 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 nova_compute[227360]: 2025-11-29 08:40:57.900 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.904 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[909ef633-a4b7-48a7-b2ed-e182db5c618e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.919 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1c572452-8b0a-4ce2-afa4-a47c27701ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.922 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[79fff59a-7f03-4c2e-8ec1-a3571f372de7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.942 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe7f952-3ed9-4c8f-bbe9-0e489b11b5a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862330, 'reachable_time': 25688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295068, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.945 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:40:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:40:57.945 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[3265250c-ccbe-4e14-8e0a-e0b8b11c98d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:57 np0005539551 systemd[1]: run-netns-ovnmeta\x2df3d6a66c\x2d1acd\x2d4ae3\x2d9639\x2db6444469c1fc.mount: Deactivated successfully.
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.126 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.127 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.127 227364 DEBUG nova.network.neutron [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:40:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.252 227364 DEBUG nova.compute.manager [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.253 227364 DEBUG nova.compute.manager [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.253 227364 DEBUG oslo_concurrency.lockutils [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.253 227364 DEBUG oslo_concurrency.lockutils [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.253 227364 DEBUG nova.network.neutron [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.280 227364 DEBUG nova.compute.manager [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.281 227364 DEBUG nova.compute.manager [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:58 np0005539551 nova_compute[227360]: 2025-11-29 08:40:58.281 227364 DEBUG oslo_concurrency.lockutils [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:40:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:59 np0005539551 podman[295070]: 2025-11-29 08:40:59.605054381 +0000 UTC m=+0.054939768 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:40:59 np0005539551 podman[295071]: 2025-11-29 08:40:59.623642555 +0000 UTC m=+0.069965406 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:40:59 np0005539551 podman[295069]: 2025-11-29 08:40:59.630089949 +0000 UTC m=+0.084160550 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.904 227364 DEBUG nova.compute.manager [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.904 227364 DEBUG oslo_concurrency.lockutils [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.904 227364 DEBUG oslo_concurrency.lockutils [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.905 227364 DEBUG oslo_concurrency.lockutils [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.905 227364 DEBUG nova.compute.manager [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:59 np0005539551 nova_compute[227360]: 2025-11-29 08:40:59.905 227364 WARNING nova.compute.manager [req-6db2ac9c-bdf6-4289-8ddc-b945c0565521 req-f3584564-fdb8-445e-8fba-14213e9de2f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-plugged-98decf71-da91-41cd-9382-afba328b1338 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:41:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:00 np0005539551 nova_compute[227360]: 2025-11-29 08:41:00.367 227364 DEBUG nova.compute.manager [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:00 np0005539551 nova_compute[227360]: 2025-11-29 08:41:00.368 227364 DEBUG nova.compute.manager [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:00 np0005539551 nova_compute[227360]: 2025-11-29 08:41:00.368 227364 DEBUG oslo_concurrency.lockutils [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:00Z|00835|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:00Z|00836|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:00Z|00837|binding|INFO|Releasing lport be178473-8fe5-4236-9899-659c9f39adc0 from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:00Z|00838|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539551 nova_compute[227360]: 2025-11-29 08:41:00.877 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.010 227364 INFO nova.network.neutron [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Port 98decf71-da91-41cd-9382-afba328b1338 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.010 227364 DEBUG nova.network.neutron [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.167 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.192 227364 DEBUG oslo_concurrency.lockutils [None req-df6851f9-6b8a-4732-b501-0232380e9882 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "interface-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-98decf71-da91-41cd-9382-afba328b1338" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.288 227364 DEBUG nova.network.neutron [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.288 227364 DEBUG nova.network.neutron [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.304 227364 DEBUG oslo_concurrency.lockutils [req-0d7a6665-547f-406f-890f-6781fbdf404f req-f5d3895c-40c1-4b45-9b99-b43ffe46a3bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.304 227364 DEBUG oslo_concurrency.lockutils [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.304 227364 DEBUG nova.network.neutron [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.984 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.984 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.984 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.985 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.985 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.986 227364 INFO nova.compute.manager [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Terminating instance#033[00m
Nov 29 03:41:01 np0005539551 nova_compute[227360]: 2025-11-29 08:41:01.987 227364 DEBUG nova.compute.manager [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.039 227364 DEBUG nova.compute.manager [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.040 227364 DEBUG nova.compute.manager [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing instance network info cache due to event network-changed-89414cbe-0908-4dc8-af0e-648ea658b1fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.040 227364 DEBUG oslo_concurrency.lockutils [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.040 227364 DEBUG oslo_concurrency.lockutils [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.040 227364 DEBUG nova.network.neutron [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Refreshing network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:02 np0005539551 kernel: tap89414cbe-09 (unregistering): left promiscuous mode
Nov 29 03:41:02 np0005539551 NetworkManager[48922]: <info>  [1764405662.0517] device (tap89414cbe-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:02Z|00839|binding|INFO|Releasing lport 89414cbe-0908-4dc8-af0e-648ea658b1fa from this chassis (sb_readonly=0)
Nov 29 03:41:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:02Z|00840|binding|INFO|Setting lport 89414cbe-0908-4dc8-af0e-648ea658b1fa down in Southbound
Nov 29 03:41:02 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:02Z|00841|binding|INFO|Removing iface tap89414cbe-09 ovn-installed in OVS
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.059 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.074 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.074 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8d:5b 10.100.0.3'], port_security=['fa:16:3e:54:8d:5b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c9adc277-3d0a-4bb8-8b47-e9f72114cdfd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-460c4768-d248-455a-be34-ea028712d091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '543f427b-dc17-4c93-a5d2-532bed14f830', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=376f822f-3723-4b37-8925-98e26047b898, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=89414cbe-0908-4dc8-af0e-648ea658b1fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.076 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 89414cbe-0908-4dc8-af0e-648ea658b1fa in datapath 460c4768-d248-455a-be34-ea028712d091 unbound from our chassis#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.077 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 460c4768-d248-455a-be34-ea028712d091, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.078 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe0dad4-a0c9-4cf5-8e92-5a3c603cd571]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.078 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-460c4768-d248-455a-be34-ea028712d091 namespace which is not needed anymore#033[00m
Nov 29 03:41:02 np0005539551 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Nov 29 03:41:02 np0005539551 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b8.scope: Consumed 18.684s CPU time.
Nov 29 03:41:02 np0005539551 systemd-machined[190756]: Machine qemu-85-instance-000000b8 terminated.
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [NOTICE]   (293916) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [NOTICE]   (293916) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [WARNING]  (293916) : Exiting Master process...
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [WARNING]  (293916) : Exiting Master process...
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [ALERT]    (293916) : Current worker (293918) exited with code 143 (Terminated)
Nov 29 03:41:02 np0005539551 neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091[293912]: [WARNING]  (293916) : All workers exited. Exiting... (0)
Nov 29 03:41:02 np0005539551 systemd[1]: libpod-8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b.scope: Deactivated successfully.
Nov 29 03:41:02 np0005539551 podman[295155]: 2025-11-29 08:41:02.20256672 +0000 UTC m=+0.041699890 container died 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.223 227364 INFO nova.virt.libvirt.driver [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance destroyed successfully.#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.224 227364 DEBUG nova.objects.instance [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid c9adc277-3d0a-4bb8-8b47-e9f72114cdfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:02 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:02 np0005539551 systemd[1]: var-lib-containers-storage-overlay-288d219dda5a6f1a7b42ef7e56fa11c9c74df7ba7654b9d5c5457006b0a213ac-merged.mount: Deactivated successfully.
Nov 29 03:41:02 np0005539551 podman[295155]: 2025-11-29 08:41:02.241035842 +0000 UTC m=+0.080168992 container cleanup 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:41:02 np0005539551 systemd[1]: libpod-conmon-8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b.scope: Deactivated successfully.
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.250 227364 DEBUG nova.virt.libvirt.vif [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1601524375',display_name='tempest-TestNetworkBasicOps-server-1601524375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1601524375',id=184,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNAyvwAZwPHz0mDZ6+uw7Ctigv7CbFX3JcWWwmeG6ohdJh9eb0DA6IZB4kjH4EtgM2s5DtdFmdoiqApvJrl5Aaw7g3k0AA8E+id9fhlaMPhZgqKsQOQc112xVs1jFb8yOA==',key_name='tempest-TestNetworkBasicOps-65375256',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-xx3hndbp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:02Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=c9adc277-3d0a-4bb8-8b47-e9f72114cdfd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.250 227364 DEBUG nova.network.os_vif_util [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.251 227364 DEBUG nova.network.os_vif_util [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.251 227364 DEBUG os_vif [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.252 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89414cbe-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.254 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.255 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.257 227364 INFO os_vif [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:8d:5b,bridge_name='br-int',has_traffic_filtering=True,id=89414cbe-0908-4dc8-af0e-648ea658b1fa,network=Network(460c4768-d248-455a-be34-ea028712d091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89414cbe-09')#033[00m
Nov 29 03:41:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:02 np0005539551 podman[295194]: 2025-11-29 08:41:02.304908642 +0000 UTC m=+0.040203689 container remove 8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.311 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bf89b511-a465-45b5-8362-062f3910d560]: (4, ('Sat Nov 29 08:41:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091 (8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b)\n8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b\nSat Nov 29 08:41:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-460c4768-d248-455a-be34-ea028712d091 (8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b)\n8e89a8e366d2469702a760aa5f9d76825bed5673d996b66f78a7c973bed6a67b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.312 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa5fd6e-f486-420f-8bec-231f67dc8875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.313 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap460c4768-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.315 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 kernel: tap460c4768-d0: left promiscuous mode
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.329 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.330 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cea29630-47d2-4a0a-bdc0-e077b16c2df0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.355 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d78fa3-8d35-4b8d-baec-0d489396537b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.357 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8862d2a2-c453-4278-93cb-ed56549e8f2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.372 227364 DEBUG nova.compute.manager [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-unplugged-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.373 227364 DEBUG oslo_concurrency.lockutils [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.373 227364 DEBUG oslo_concurrency.lockutils [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.373 227364 DEBUG oslo_concurrency.lockutils [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.373 227364 DEBUG nova.compute.manager [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-unplugged-89414cbe-0908-4dc8-af0e-648ea658b1fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.374 227364 DEBUG nova.compute.manager [req-011ffd32-74d3-4a23-8014-106f8d6e8391 req-6b0e872b-901e-4bac-ac4c-c41a4c225693 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-unplugged-89414cbe-0908-4dc8-af0e-648ea658b1fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.375 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[404c8a29-f2af-4cb7-aedf-154d5b6b1365]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858468, 'reachable_time': 36074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295227, 'error': None, 'target': 'ovnmeta-460c4768-d248-455a-be34-ea028712d091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.377 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-460c4768-d248-455a-be34-ea028712d091 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:02 np0005539551 systemd[1]: run-netns-ovnmeta\x2d460c4768\x2dd248\x2d455a\x2dbe34\x2dea028712d091.mount: Deactivated successfully.
Nov 29 03:41:02 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:02.377 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8cbb43-602f-44c6-8e7e-86cbf52414be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.734 227364 INFO nova.virt.libvirt.driver [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Deleting instance files /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_del#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.734 227364 INFO nova.virt.libvirt.driver [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Deletion of /var/lib/nova/instances/c9adc277-3d0a-4bb8-8b47-e9f72114cdfd_del complete#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.776 227364 INFO nova.compute.manager [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.777 227364 DEBUG oslo.service.loopingcall [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.777 227364 DEBUG nova.compute.manager [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:02 np0005539551 nova_compute[227360]: 2025-11-29 08:41:02.777 227364 DEBUG nova.network.neutron [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.198 227364 DEBUG nova.network.neutron [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.199 227364 DEBUG nova.network.neutron [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.222 227364 DEBUG oslo_concurrency.lockutils [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.223 227364 DEBUG nova.compute.manager [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-deleted-98decf71-da91-41cd-9382-afba328b1338 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.223 227364 INFO nova.compute.manager [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Neutron deleted interface 98decf71-da91-41cd-9382-afba328b1338; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.223 227364 DEBUG nova.network.neutron [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.224 227364 DEBUG oslo_concurrency.lockutils [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.225 227364 DEBUG nova.network.neutron [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.260 227364 DEBUG nova.compute.manager [req-14de6210-b322-4542-975b-94ba47525aa7 req-53845ab4-e14d-4554-9425-6fac99b51f47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Detach interface failed, port_id=98decf71-da91-41cd-9382-afba328b1338, reason: Instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.502 227364 DEBUG nova.compute.manager [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.503 227364 DEBUG nova.compute.manager [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.505 227364 DEBUG oslo_concurrency.lockutils [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.680 227364 DEBUG nova.network.neutron [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.697 227364 INFO nova.compute.manager [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.752 227364 DEBUG nova.network.neutron [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updated VIF entry in instance network info cache for port 89414cbe-0908-4dc8-af0e-648ea658b1fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.753 227364 DEBUG nova.network.neutron [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Updating instance_info_cache with network_info: [{"id": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "address": "fa:16:3e:54:8d:5b", "network": {"id": "460c4768-d248-455a-be34-ea028712d091", "bridge": "br-int", "label": "tempest-network-smoke--245531740", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89414cbe-09", "ovs_interfaceid": "89414cbe-0908-4dc8-af0e-648ea658b1fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.759 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.760 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.777 227364 DEBUG oslo_concurrency.lockutils [req-95beb135-97f7-40d6-86c8-86e988b7db78 req-b785b1b1-7f7e-4e57-9324-e01bab47dc2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:03 np0005539551 nova_compute[227360]: 2025-11-29 08:41:03.822 227364 DEBUG oslo_concurrency.processutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4127035829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.244 227364 DEBUG oslo_concurrency.processutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.252 227364 DEBUG nova.compute.provider_tree [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.266 227364 DEBUG nova.scheduler.client.report [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.286 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.311 227364 INFO nova.scheduler.client.report [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.379 227364 DEBUG oslo_concurrency.lockutils [None req-42ab005f-ab47-456f-a6dd-222dcd688788 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.518 227364 DEBUG nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.518 227364 DEBUG oslo_concurrency.lockutils [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.519 227364 DEBUG oslo_concurrency.lockutils [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.519 227364 DEBUG oslo_concurrency.lockutils [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c9adc277-3d0a-4bb8-8b47-e9f72114cdfd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.520 227364 DEBUG nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] No waiting events found dispatching network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.520 227364 WARNING nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received unexpected event network-vif-plugged-89414cbe-0908-4dc8-af0e-648ea658b1fa for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.520 227364 DEBUG nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Received event network-vif-deleted-89414cbe-0908-4dc8-af0e-648ea658b1fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.521 227364 INFO nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Neutron deleted interface 89414cbe-0908-4dc8-af0e-648ea658b1fa; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.521 227364 DEBUG nova.network.neutron [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:41:04 np0005539551 nova_compute[227360]: 2025-11-29 08:41:04.526 227364 DEBUG nova.compute.manager [req-ef08fa6f-d1b1-43f3-862c-0c3cc022aec8 req-8f59286f-5ed4-4ed4-b4be-05611974241e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Detach interface failed, port_id=89414cbe-0908-4dc8-af0e-648ea658b1fa, reason: Instance c9adc277-3d0a-4bb8-8b47-e9f72114cdfd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.025 227364 DEBUG nova.network.neutron [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.026 227364 DEBUG nova.network.neutron [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.049 227364 DEBUG oslo_concurrency.lockutils [req-26c1e758-b2f6-4498-8625-55a11501b0ea req-d99e1c90-8521-4f2a-ad4a-a4845caf3093 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.050 227364 DEBUG oslo_concurrency.lockutils [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.050 227364 DEBUG nova.network.neutron [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:05 np0005539551 nova_compute[227360]: 2025-11-29 08:41:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:06.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.388 227364 DEBUG oslo_concurrency.lockutils [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.389 227364 DEBUG oslo_concurrency.lockutils [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.401 227364 INFO nova.compute.manager [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Detaching volume ddbdab48-23f2-48fc-bdd4-33793fe5d282#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.583 227364 INFO nova.virt.block_device [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attempting to driver detach volume ddbdab48-23f2-48fc-bdd4-33793fe5d282 from mountpoint /dev/vdb#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.596 227364 DEBUG nova.virt.libvirt.driver [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdb from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.597 227364 DEBUG nova.virt.libvirt.guest [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-ddbdab48-23f2-48fc-bdd4-33793fe5d282">
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <serial>ddbdab48-23f2-48fc-bdd4-33793fe5d282</serial>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:41:06 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.598 227364 DEBUG nova.network.neutron [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.599 227364 DEBUG nova.network.neutron [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.607 227364 INFO nova.virt.libvirt.driver [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the persistent domain config.#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.608 227364 DEBUG nova.virt.libvirt.driver [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.608 227364 DEBUG nova.virt.libvirt.guest [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-ddbdab48-23f2-48fc-bdd4-33793fe5d282">
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <serial>ddbdab48-23f2-48fc-bdd4-33793fe5d282</serial>
Nov 29 03:41:06 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:41:06 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:41:06 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.620 227364 DEBUG oslo_concurrency.lockutils [req-1c25d9a6-f9db-44c2-a6a2-03a8a42e7c5a req-892a4901-495e-445d-bbb1-1f1542d3cf7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.893 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:06.893 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:06.894 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.961 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405666.9607954, fc75971c-cf7e-4383-81cf-81c801f67489 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.962 227364 DEBUG nova.virt.libvirt.driver [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance fc75971c-cf7e-4383-81cf-81c801f67489 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:41:06 np0005539551 nova_compute[227360]: 2025-11-29 08:41:06.964 227364 INFO nova.virt.libvirt.driver [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the live domain config.#033[00m
Nov 29 03:41:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 03:41:07 np0005539551 nova_compute[227360]: 2025-11-29 08:41:07.246 227364 DEBUG nova.objects.instance [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:07 np0005539551 nova_compute[227360]: 2025-11-29 08:41:07.278 227364 DEBUG oslo_concurrency.lockutils [None req-f8fdc52d-c49a-46a0-bbcb-644777f82826 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:07 np0005539551 nova_compute[227360]: 2025-11-29 08:41:07.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:08Z|00842|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:08.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:08 np0005539551 nova_compute[227360]: 2025-11-29 08:41:08.362 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:09 np0005539551 nova_compute[227360]: 2025-11-29 08:41:09.079 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.186252) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669186471, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1895, "num_deletes": 254, "total_data_size": 4152960, "memory_usage": 4222320, "flush_reason": "Manual Compaction"}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669202168, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2726750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63153, "largest_seqno": 65043, "table_properties": {"data_size": 2718957, "index_size": 4606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17585, "raw_average_key_size": 20, "raw_value_size": 2702832, "raw_average_value_size": 3202, "num_data_blocks": 200, "num_entries": 844, "num_filter_entries": 844, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405525, "oldest_key_time": 1764405525, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 15961 microseconds, and 6531 cpu microseconds.
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.202222) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2726750 bytes OK
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.202242) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.204276) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.204304) EVENT_LOG_v1 {"time_micros": 1764405669204285, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.204322) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 4144265, prev total WAL file size 4144265, number of live WAL files 2.
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.205351) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2662KB)], [129(10MB)]
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669205382, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13244626, "oldest_snapshot_seqno": -1}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9491 keys, 11345704 bytes, temperature: kUnknown
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669261942, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11345704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11285754, "index_size": 35133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 250506, "raw_average_key_size": 26, "raw_value_size": 11120077, "raw_average_value_size": 1171, "num_data_blocks": 1332, "num_entries": 9491, "num_filter_entries": 9491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.262463) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11345704 bytes
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.264014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.5 rd, 200.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 10018, records dropped: 527 output_compression: NoCompression
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.264048) EVENT_LOG_v1 {"time_micros": 1764405669264032, "job": 82, "event": "compaction_finished", "compaction_time_micros": 56733, "compaction_time_cpu_micros": 26234, "output_level": 6, "num_output_files": 1, "total_output_size": 11345704, "num_input_records": 10018, "num_output_records": 9491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669265501, "job": 82, "event": "table_file_deletion", "file_number": 131}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669269334, "job": 82, "event": "table_file_deletion", "file_number": 129}
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.205252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.269522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.269532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.269537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.269541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:41:09.269545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:09.895 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.166 227364 DEBUG oslo_concurrency.lockutils [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.166 227364 DEBUG oslo_concurrency.lockutils [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.181 227364 INFO nova.compute.manager [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Detaching volume f8d3ed9a-0699-4f69-8315-0c8ebf96b12c#033[00m
Nov 29 03:41:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:10.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.347 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.373 227364 INFO nova.virt.block_device [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Attempting to driver detach volume f8d3ed9a-0699-4f69-8315-0c8ebf96b12c from mountpoint /dev/vdc#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.380 227364 DEBUG nova.virt.libvirt.driver [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdc from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.381 227364 DEBUG nova.virt.libvirt.guest [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-f8d3ed9a-0699-4f69-8315-0c8ebf96b12c">
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <serial>f8d3ed9a-0699-4f69-8315-0c8ebf96b12c</serial>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:41:10 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.387 227364 INFO nova.virt.libvirt.driver [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the persistent domain config.#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.388 227364 DEBUG nova.virt.libvirt.driver [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.388 227364 DEBUG nova.virt.libvirt.guest [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <source protocol="rbd" name="volumes/volume-f8d3ed9a-0699-4f69-8315-0c8ebf96b12c">
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  </source>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <serial>f8d3ed9a-0699-4f69-8315-0c8ebf96b12c</serial>
Nov 29 03:41:10 np0005539551 nova_compute[227360]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:41:10 np0005539551 nova_compute[227360]: </disk>
Nov 29 03:41:10 np0005539551 nova_compute[227360]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.460 227364 DEBUG nova.virt.libvirt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Received event <DeviceRemovedEvent: 1764405670.4597604, fc75971c-cf7e-4383-81cf-81c801f67489 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.462 227364 DEBUG nova.virt.libvirt.driver [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance fc75971c-cf7e-4383-81cf-81c801f67489 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.464 227364 INFO nova.virt.libvirt.driver [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance fc75971c-cf7e-4383-81cf-81c801f67489 from the live domain config.
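The detach payload logged above is ordinary libvirt disk XML. A minimal sketch of assembling the same structure with Python's stdlib `xml.etree` (volume name, monitor addresses, and serial copied from the log; `rbd_disk_xml` is a hypothetical helper for illustration, not nova's actual builder):

```python
import xml.etree.ElementTree as ET

def rbd_disk_xml(volume, hosts, target_dev, serial):
    """Build a libvirt <disk> element for an RBD-backed volume."""
    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw",
                  cache="none", discard="unmap")
    source = ET.SubElement(disk, "source", protocol="rbd", name=volume)
    for host in hosts:  # one <host> per ceph monitor
        ET.SubElement(source, "host", name=host, port="6789")
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    ET.SubElement(disk, "serial").text = serial
    return ET.tostring(disk, encoding="unicode")

xml = rbd_disk_xml(
    "volumes/volume-f8d3ed9a-0699-4f69-8315-0c8ebf96b12c",
    ["192.168.122.100", "192.168.122.102", "192.168.122.101"],
    "vdc",
    "f8d3ed9a-0699-4f69-8315-0c8ebf96b12c",
)
print(xml)
```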
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.654 227364 DEBUG nova.objects.instance [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:41:10 np0005539551 nova_compute[227360]: 2025-11-29 08:41:10.693 227364 DEBUG oslo_concurrency.lockutils [None req-7313f7a1-6224-4331-a833-9a0c94c8b706 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
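The `oslo_concurrency.lockutils` lines throughout this log record how long each named lock was waited on and how long it was held. A rough stdlib sketch of that pattern (the `timed_lock` helper is hypothetical; oslo.concurrency's implementation differs):

```python
import threading
import time
from contextlib import contextmanager

@contextmanager
def timed_lock(lock, name):
    """Acquire `lock`, reporting wait time on entry and hold time on exit."""
    t0 = time.monotonic()
    lock.acquire()
    waited = time.monotonic() - t0
    print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
    t1 = time.monotonic()
    try:
        yield
    finally:
        held = time.monotonic() - t1
        lock.release()
        print(f'Lock "{name}" released :: held {held:.3f}s')

lock = threading.Lock()
with timed_lock(lock, "fc75971c-cf7e-4383-81cf-81c801f67489"):
    time.sleep(0.01)  # stand-in for the serialized detach work
```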
Nov 29 03:41:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:11 np0005539551 nova_compute[227360]: 2025-11-29 08:41:11.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
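Each `beast:` line from radosgw is a compact access-log record. A small sketch of pulling the client IP, HTTP status, and latency out of one (the regex is an assumption fitted to the lines in this log, not an official format specification):

```python
import re

# Fields, in order: handler address, client ip, user, timestamp,
# request line, status, bytes, then a trailing latency= figure.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:41:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
print(m.group("ip"), m.group("status"), float(m.group("latency")))
```

Here the repeated `HEAD /` requests from 192.168.122.100/.102 look like load-balancer health checks.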
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.243 227364 DEBUG nova.compute.manager [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.244 227364 DEBUG nova.compute.manager [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.244 227364 DEBUG oslo_concurrency.lockutils [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.244 227364 DEBUG oslo_concurrency.lockutils [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.245 227364 DEBUG nova.network.neutron [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:41:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:12.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:12 np0005539551 nova_compute[227360]: 2025-11-29 08:41:12.315 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:13 np0005539551 nova_compute[227360]: 2025-11-29 08:41:13.743 227364 DEBUG nova.network.neutron [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:41:13 np0005539551 nova_compute[227360]: 2025-11-29 08:41:13.744 227364 DEBUG nova.network.neutron [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:41:13 np0005539551 nova_compute[227360]: 2025-11-29 08:41:13.773 227364 DEBUG oslo_concurrency.lockutils [req-187c89a3-7f43-471e-bd2a-19fbf2e8a14a req-dc850770-e052-4b22-9a9c-d7d8f7244eca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
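The `network_info` blob nova logs above is JSON-parseable as written. A trimmed sketch of reading the fields most often needed when debugging a port (the excerpt keeps only part of the logged structure):

```python
import json

# Trimmed excerpt of the network_info entry logged above; the full
# record carries the same shape with more keys.
network_info = json.loads('''
[{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b",
  "address": "fa:16:3e:6f:01:5b",
  "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.5", "version": 4}]}],
              "meta": {"mtu": 1442, "tunneled": true}},
  "type": "ovs", "devname": "tap6ae38b68-e1", "active": true}]
''')

vif = network_info[0]
fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
print(vif["devname"], vif["address"], fixed_ip,
      vif["network"]["meta"]["mtu"])
```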
Nov 29 03:41:14 np0005539551 nova_compute[227360]: 2025-11-29 08:41:14.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:14.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:14 np0005539551 nova_compute[227360]: 2025-11-29 08:41:14.440 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.594 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.786 227364 DEBUG nova.compute.manager [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.786 227364 DEBUG nova.compute.manager [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing instance network info cache due to event network-changed-6ae38b68-e18b-4ba6-be6e-d411d58b407b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.786 227364 DEBUG oslo_concurrency.lockutils [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.787 227364 DEBUG oslo_concurrency.lockutils [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:41:15 np0005539551 nova_compute[227360]: 2025-11-29 08:41:15.787 227364 DEBUG nova.network.neutron [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Refreshing network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:41:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:16.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.221 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405662.2200167, c9adc277-3d0a-4bb8-8b47-e9f72114cdfd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.221 227364 INFO nova.compute.manager [-] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] VM Stopped (Lifecycle Event)
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.242 227364 DEBUG nova.compute.manager [None req-3107cda2-8567-426e-8d22-00c1ff9825da - - - - - -] [instance: c9adc277-3d0a-4bb8-8b47-e9f72114cdfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.318 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.376 227364 DEBUG nova.network.neutron [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updated VIF entry in instance network info cache for port 6ae38b68-e18b-4ba6-be6e-d411d58b407b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.377 227364 DEBUG nova.network.neutron [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [{"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:41:17 np0005539551 nova_compute[227360]: 2025-11-29 08:41:17.390 227364 DEBUG oslo_concurrency.lockutils [req-a76badf4-d44b-4b98-bfcb-c05098d101ce req-0523f929-f8ea-4573-a69f-f2d4f9329ed5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fc75971c-cf7e-4383-81cf-81c801f67489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:41:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:18.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:18.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.455 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.456 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.476 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.589 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.590 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.597 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.597 227364 INFO nova.compute.claims [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:41:18 np0005539551 nova_compute[227360]: 2025-11-29 08:41:18.838 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3509680592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.282 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
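`ceph df --format=json`, run here by nova to size the RBD-backed storage provider, reports cluster-wide totals and per-pool usage as JSON. A sketch of parsing such output (the sample document below is hypothetical and heavily trimmed; real output carries many more fields and key names vary somewhat across Ceph releases):

```python
import json

# Hypothetical, trimmed `ceph df --format=json` output for illustration.
sample = json.loads('''
{"stats": {"total_bytes": 64424509440,
           "total_used_bytes": 10737418240,
           "total_avail_bytes": 53687091200},
 "pools": [{"name": "vms", "stats": {"bytes_used": 5368709120}}]}
''')

stats = sample["stats"]
free_gb = stats["total_avail_bytes"] / 1024 ** 3
total_gb = stats["total_bytes"] / 1024 ** 3
print(f"free: {free_gb:.0f} GiB of {total_gb:.0f} GiB")
```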
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.287 227364 DEBUG nova.compute.provider_tree [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.311 227364 DEBUG nova.scheduler.client.report [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.337 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
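Placement treats each resource class in the inventory reported above as schedulable capacity of (total - reserved) * allocation_ratio. A quick check with the values from this log:

```python
# Inventory as reported by nova.scheduler.client.report above
# (min_unit/max_unit/step_size omitted for brevity).
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    """Schedulable capacity per resource class: (total - reserved) * ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(effective_capacity(inventory))
```

So this node can overcommit to 32 VCPU allocations while exposing only about 17 GB of schedulable disk.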
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.338 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.383 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.383 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.406 227364 INFO nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.423 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.529 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.530 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.530 227364 INFO nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Creating image(s)
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.555 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.580 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.603 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.607 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.640 227364 DEBUG nova.policy [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9046bfcf55a4809aaf25edff2e41e75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f62777789c6f4c779695a4bd13f4a8a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.678 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.678 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.679 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.680 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.704 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:19 np0005539551 nova_compute[227360]: 2025-11-29 08:41:19.707 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:19.891 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:19.891 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:19.892 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:20.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:20 np0005539551 nova_compute[227360]: 2025-11-29 08:41:20.775 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:20 np0005539551 nova_compute[227360]: 2025-11-29 08:41:20.805 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Successfully created port: c597a56c-4986-4a01-80a3-e89d85e8cc17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:41:20 np0005539551 nova_compute[227360]: 2025-11-29 08:41:20.844 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] resizing rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.040 227364 DEBUG nova.objects.instance [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 695c5eef-14b1-4f3f-8570-d76e4500f9f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.061 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.061 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Ensure instance console log exists: /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.061 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.062 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.062 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.602 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Successfully updated port: c597a56c-4986-4a01-80a3-e89d85e8cc17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.620 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.620 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquired lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.620 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.685 227364 DEBUG nova.compute.manager [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.685 227364 DEBUG nova.compute.manager [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing instance network info cache due to event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.686 227364 DEBUG oslo_concurrency.lockutils [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.740 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.740 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.741 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.741 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.741 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.742 227364 INFO nova.compute.manager [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Terminating instance#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.743 227364 DEBUG nova.compute.manager [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:21 np0005539551 kernel: tap6ae38b68-e1 (unregistering): left promiscuous mode
Nov 29 03:41:21 np0005539551 NetworkManager[48922]: <info>  [1764405681.7956] device (tap6ae38b68-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:21Z|00843|binding|INFO|Releasing lport 6ae38b68-e18b-4ba6-be6e-d411d58b407b from this chassis (sb_readonly=0)
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.802 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:21Z|00844|binding|INFO|Setting lport 6ae38b68-e18b-4ba6-be6e-d411d58b407b down in Southbound
Nov 29 03:41:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:21Z|00845|binding|INFO|Removing iface tap6ae38b68-e1 ovn-installed in OVS
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:21.815 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:01:5b 10.100.0.5'], port_security=['fa:16:3e:6f:01:5b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fc75971c-cf7e-4383-81cf-81c801f67489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6ae38b68-e18b-4ba6-be6e-d411d58b407b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:21.816 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6ae38b68-e18b-4ba6-be6e-d411d58b407b in datapath c114bc23-cd62-4198-a95d-5595953a88bd unbound from our chassis#033[00m
Nov 29 03:41:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:21.818 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c114bc23-cd62-4198-a95d-5595953a88bd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:21.819 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7c995f1a-eb62-4c16-98ce-3d70ad285be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:21.819 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace which is not needed anymore#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.822 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:21 np0005539551 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Nov 29 03:41:21 np0005539551 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bc.scope: Consumed 18.092s CPU time.
Nov 29 03:41:21 np0005539551 systemd-machined[190756]: Machine qemu-86-instance-000000bc terminated.
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [NOTICE]   (294487) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [NOTICE]   (294487) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [WARNING]  (294487) : Exiting Master process...
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [WARNING]  (294487) : Exiting Master process...
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [ALERT]    (294487) : Current worker (294489) exited with code 143 (Terminated)
Nov 29 03:41:21 np0005539551 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[294483]: [WARNING]  (294487) : All workers exited. Exiting... (0)
Nov 29 03:41:21 np0005539551 systemd[1]: libpod-cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e.scope: Deactivated successfully.
Nov 29 03:41:21 np0005539551 podman[295469]: 2025-11-29 08:41:21.945700084 +0000 UTC m=+0.043232961 container died cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:41:21 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:21 np0005539551 systemd[1]: var-lib-containers-storage-overlay-eb2080b7d3a7fc4153ce50c63bb1ab95a130798c2afeb7298e85353fe28fffde-merged.mount: Deactivated successfully.
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.984 227364 INFO nova.virt.libvirt.driver [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Instance destroyed successfully.#033[00m
Nov 29 03:41:21 np0005539551 nova_compute[227360]: 2025-11-29 08:41:21.985 227364 DEBUG nova.objects.instance [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'resources' on Instance uuid fc75971c-cf7e-4383-81cf-81c801f67489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:21 np0005539551 podman[295469]: 2025-11-29 08:41:21.991831853 +0000 UTC m=+0.089364730 container cleanup cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:41:21 np0005539551 systemd[1]: libpod-conmon-cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e.scope: Deactivated successfully.
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.005 227364 DEBUG nova.virt.libvirt.vif [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-700405556',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-700405556',id=188,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-k73u0foc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:38Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=fc75971c-cf7e-4383-81cf-81c801f67489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.006 227364 DEBUG nova.network.os_vif_util [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "address": "fa:16:3e:6f:01:5b", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ae38b68-e1", "ovs_interfaceid": "6ae38b68-e18b-4ba6-be6e-d411d58b407b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.007 227364 DEBUG nova.network.os_vif_util [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.007 227364 DEBUG os_vif [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.011 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.011 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ae38b68-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.013 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.016 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.018 227364 INFO os_vif [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=6ae38b68-e18b-4ba6-be6e-d411d58b407b,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ae38b68-e1')#033[00m
Nov 29 03:41:22 np0005539551 podman[295509]: 2025-11-29 08:41:22.049942396 +0000 UTC m=+0.038796591 container remove cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.056 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaadb26-48ce-49e1-962c-9bf2a4244ad1]: (4, ('Sat Nov 29 08:41:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e)\ncac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e\nSat Nov 29 08:41:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (cac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e)\ncac59ee23e98bf042b06bba3ad759e6e56834ccb48af9a9f5d305b3c3a9d658e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.058 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aff48e96-1128-46d9-bc82-bc38a8463de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.059 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.102 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539551 kernel: tapc114bc23-c0: left promiscuous mode
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.117 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.120 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[218dfed8-60b8-43d9-8c2b-53582e9aca15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.134 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e2314834-646e-4db5-a618-6fa7b54261d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.136 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[92055a85-817b-4893-88af-9773300cd2b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.151 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2d793701-eb14-4c52-9a3c-802966dc91a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862427, 'reachable_time': 32693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295540, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.153 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:22.154 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b50b6eca-979a-4fe4-9e54-3a4aefa24979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:22 np0005539551 systemd[1]: run-netns-ovnmeta\x2dc114bc23\x2dcd62\x2d4198\x2da95d\x2d5595953a88bd.mount: Deactivated successfully.
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.211 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.242 227364 INFO nova.virt.libvirt.driver [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Deleting instance files /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489_del#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.243 227364 INFO nova.virt.libvirt.driver [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Deletion of /var/lib/nova/instances/fc75971c-cf7e-4383-81cf-81c801f67489_del complete#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.290 227364 INFO nova.compute.manager [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.291 227364 DEBUG oslo.service.loopingcall [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.291 227364 DEBUG nova.compute.manager [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:22 np0005539551 nova_compute[227360]: 2025-11-29 08:41:22.291 227364 DEBUG nova.network.neutron [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:22.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.021 227364 DEBUG nova.network.neutron [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.045 227364 INFO nova.compute.manager [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Took 0.75 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.050 227364 DEBUG nova.network.neutron [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updating instance_info_cache with network_info: [{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.071 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Releasing lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.071 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Instance network_info: |[{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.071 227364 DEBUG oslo_concurrency.lockutils [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.072 227364 DEBUG nova.network.neutron [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.074 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Start _get_guest_xml network_info=[{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.078 227364 WARNING nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.083 227364 DEBUG nova.virt.libvirt.host [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.084 227364 DEBUG nova.virt.libvirt.host [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.090 227364 DEBUG nova.virt.libvirt.host [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.091 227364 DEBUG nova.virt.libvirt.host [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.092 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.092 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.092 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.093 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.093 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.093 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.093 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.093 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.094 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.094 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.094 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.094 227364 DEBUG nova.virt.hardware [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.097 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.176 227364 DEBUG nova.compute.manager [req-76959a48-8d64-433a-8877-ade58d1547fa req-fd591b7f-63ff-4d1d-955a-2829eafce4ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-vif-deleted-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.347 227364 INFO nova.compute.manager [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Took 0.30 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.410 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.410 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.513 227364 DEBUG oslo_concurrency.processutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107150212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.545 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.574 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.579 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.797 227364 DEBUG nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-vif-unplugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.798 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.798 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.799 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.799 227364 DEBUG nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] No waiting events found dispatching network-vif-unplugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.799 227364 WARNING nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received unexpected event network-vif-unplugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.799 227364 DEBUG nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.800 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.800 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.800 227364 DEBUG oslo_concurrency.lockutils [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.801 227364 DEBUG nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] No waiting events found dispatching network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.801 227364 WARNING nova.compute.manager [req-ad37bf46-a20a-4c66-bb58-2eafb9c38717 req-90e5cd19-d6ed-47df-911c-0c46b364b6ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Received unexpected event network-vif-plugged-6ae38b68-e18b-4ba6-be6e-d411d58b407b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3148235743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.940 227364 DEBUG oslo_concurrency.processutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:23 np0005539551 nova_compute[227360]: 2025-11-29 08:41:23.945 227364 DEBUG nova.compute.provider_tree [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3011916471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.008 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.009 227364 DEBUG nova.virt.libvirt.vif [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1990481903-ac',id=191,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmjfd2y7sEmTsC5xZ1nT9DN/WCE26pKipEQS2aTf8eym9bYPBuA0UMch6Jpi2x/MkVRsi765a4KMJRED2sKY8omOdDqfn4nQxGsjvpYTZF5jPc5Ny1nzuhZN4SUjckzkw==',key_name='tempest-TestSecurityGroupsBasicOps-960433095',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f62777789c6f4c779695a4bd13f4a8a5',ramdisk_id='',reservation_id='r-2kecd4zi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1990481903',owner_user_name='tempest-TestSecurityGroupsBasicOps-1990481903-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:19Z,user_data=None,user_id='a9046bfcf55a4809aaf25edff2e41e75',uuid=695c5eef-14b1-4f3f-8570-d76e4500f9f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.009 227364 DEBUG nova.network.os_vif_util [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converting VIF {"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.010 227364 DEBUG nova.network.os_vif_util [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.012 227364 DEBUG nova.objects.instance [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 695c5eef-14b1-4f3f-8570-d76e4500f9f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.086 227364 DEBUG nova.scheduler.client.report [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.092 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <uuid>695c5eef-14b1-4f3f-8570-d76e4500f9f0</uuid>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <name>instance-000000bf</name>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863</nova:name>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:41:23</nova:creationTime>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:user uuid="a9046bfcf55a4809aaf25edff2e41e75">tempest-TestSecurityGroupsBasicOps-1990481903-project-member</nova:user>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:project uuid="f62777789c6f4c779695a4bd13f4a8a5">tempest-TestSecurityGroupsBasicOps-1990481903</nova:project>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <nova:port uuid="c597a56c-4986-4a01-80a3-e89d85e8cc17">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="serial">695c5eef-14b1-4f3f-8570-d76e4500f9f0</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="uuid">695c5eef-14b1-4f3f-8570-d76e4500f9f0</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:9f:15:80"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <target dev="tapc597a56c-49"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/console.log" append="off"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:41:24 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:41:24 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:41:24 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:41:24 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.093 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Preparing to wait for external event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.094 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.094 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.094 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.095 227364 DEBUG nova.virt.libvirt.vif [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1990481903-ac',id=191,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmjfd2y7sEmTsC5xZ1nT9DN/WCE26pKipEQS2aTf8eym9bYPBuA0UMch6Jpi2x/MkVRsi765a4KMJRED2sKY8omOdDqfn4nQxGsjvpYTZF5jPc5Ny1nzuhZN4SUjckzkw==',key_name='tempest-TestSecurityGroupsBasicOps-960433095',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f62777789c6f4c779695a4bd13f4a8a5',ramdisk_id='',reservation_id='r-2kecd4zi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1990481903',owner_user_name='tempest-TestSecurityGroupsBasicOps-1990481903-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:19Z,user_data=None,user_id='a9046bfcf55a4809aaf25edff2e41e75',uuid=695c5eef-14b1-4f3f-8570-d76e4500f9f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.096 227364 DEBUG nova.network.os_vif_util [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converting VIF {"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.096 227364 DEBUG nova.network.os_vif_util [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.097 227364 DEBUG os_vif [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.099 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.099 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.103 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc597a56c-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.104 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc597a56c-49, col_values=(('external_ids', {'iface-id': 'c597a56c-4986-4a01-80a3-e89d85e8cc17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:15:80', 'vm-uuid': '695c5eef-14b1-4f3f-8570-d76e4500f9f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.105 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539551 NetworkManager[48922]: <info>  [1764405684.1067] manager: (tapc597a56c-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.111 227364 INFO os_vif [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49')#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.125 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.159 227364 INFO nova.scheduler.client.report [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Deleted allocations for instance fc75971c-cf7e-4383-81cf-81c801f67489#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.191 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.192 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.192 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] No VIF found with MAC fa:16:3e:9f:15:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.193 227364 INFO nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Using config drive#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.217 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.224 227364 DEBUG oslo_concurrency.lockutils [None req-afa8b8df-f63b-4f83-97ce-4a1d78d0f8a7 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "fc75971c-cf7e-4383-81cf-81c801f67489" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:24.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:24.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.518 227364 DEBUG nova.network.neutron [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updated VIF entry in instance network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.518 227364 DEBUG nova.network.neutron [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updating instance_info_cache with network_info: [{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.532 227364 DEBUG oslo_concurrency.lockutils [req-d65e0257-b2ae-4512-a5d1-3f45ee585c68 req-b8280c0d-60e3-460b-b604-2851539ecbe0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.612 227364 INFO nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Creating config drive at /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.617 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfbpcj7ff execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.761 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfbpcj7ff" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.788 227364 DEBUG nova.storage.rbd_utils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] rbd image 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539551 nova_compute[227360]: 2025-11-29 08:41:24.792 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.117 227364 DEBUG oslo_concurrency.processutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config 695c5eef-14b1-4f3f-8570-d76e4500f9f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.119 227364 INFO nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Deleting local config drive /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.1712] manager: (tapc597a56c-49): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 03:41:25 np0005539551 kernel: tapc597a56c-49: entered promiscuous mode
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.174 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:25Z|00846|binding|INFO|Claiming lport c597a56c-4986-4a01-80a3-e89d85e8cc17 for this chassis.
Nov 29 03:41:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:25Z|00847|binding|INFO|c597a56c-4986-4a01-80a3-e89d85e8cc17: Claiming fa:16:3e:9f:15:80 10.100.0.9
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.182 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:15:80 10.100.0.9'], port_security=['fa:16:3e:9f:15:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '695c5eef-14b1-4f3f-8570-d76e4500f9f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d577637-d13b-484b-b872-20ebc41c4d82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f62777789c6f4c779695a4bd13f4a8a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3a34062f-60f9-4b66-89e3-ade27f6180be dca03b04-d1bc-480a-870c-93cf2cf468c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9d2d3-fbf2-4315-9f5d-7087800d2580, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c597a56c-4986-4a01-80a3-e89d85e8cc17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.183 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c597a56c-4986-4a01-80a3-e89d85e8cc17 in datapath 1d577637-d13b-484b-b872-20ebc41c4d82 bound to our chassis#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.184 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d577637-d13b-484b-b872-20ebc41c4d82#033[00m
Nov 29 03:41:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:25Z|00848|binding|INFO|Setting lport c597a56c-4986-4a01-80a3-e89d85e8cc17 ovn-installed in OVS
Nov 29 03:41:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:25Z|00849|binding|INFO|Setting lport c597a56c-4986-4a01-80a3-e89d85e8cc17 up in Southbound
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.193 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.198 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1deb80c2-38e4-459c-bb82-f08cd1ed61d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.199 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d577637-d1 in ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 systemd-udevd[295701]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.200 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d577637-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.201 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8f77d8-7234-40ff-b466-67edb2b560d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 systemd-machined[190756]: New machine qemu-87-instance-000000bf.
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.201 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[90bbcf7d-db28-4e2d-99c4-ea807c93b79f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.2096] device (tapc597a56c-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.2105] device (tapc597a56c-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:25 np0005539551 systemd[1]: Started Virtual Machine qemu-87-instance-000000bf.
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.214 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[272bd3f3-6470-422b-a36d-5c21833e8b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.239 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e30bc3d7-b66b-4443-a4d3-4ffaa83025c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.266 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a0857a0f-16bc-42fd-b729-739e80bd8709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 systemd-udevd[295704]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.2727] manager: (tap1d577637-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.271 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[12229c95-1d94-4b23-aafe-d9ebc1b5203f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.299 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fa854f-1560-46fa-bfe9-5ddcefbc749a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.302 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2eed7c8b-7cd8-42ed-889e-80d3c272328b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.3224] device (tap1d577637-d0): carrier: link connected
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.328 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[13269730-1c0f-4047-a84f-558a7fb3f3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.346 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[33428fb0-bf36-4eb1-a9da-1c11559d1f5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d577637-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:05:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873220, 'reachable_time': 33377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295733, 'error': None, 'target': 'ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.362 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[abb73939-6c98-4d8c-a7a1-87cf0acab87e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:529'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873220, 'tstamp': 873220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295734, 'error': None, 'target': 'ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.383 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb62959-d79d-44d6-a072-4ce3f65dcee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d577637-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:05:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873220, 'reachable_time': 33377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295735, 'error': None, 'target': 'ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.414 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c7614a96-ed22-4806-a063-d91e1b0a10c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.485 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c202c25-bd0b-4f3f-91e9-3d6e13417dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.487 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d577637-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.487 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.487 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d577637-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:25 np0005539551 kernel: tap1d577637-d0: entered promiscuous mode
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 NetworkManager[48922]: <info>  [1764405685.4897] manager: (tap1d577637-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.494 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d577637-d0, col_values=(('external_ids', {'iface-id': '5d130a64-533c-41d2-b532-7fb647957491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.495 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:25Z|00850|binding|INFO|Releasing lport 5d130a64-533c-41d2-b532-7fb647957491 from this chassis (sb_readonly=0)
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.497 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.499 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d577637-d13b-484b-b872-20ebc41c4d82.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d577637-d13b-484b-b872-20ebc41c4d82.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.500 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f915e2af-ab4c-4e7f-8e22-f2cd56eaddc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.501 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-1d577637-d13b-484b-b872-20ebc41c4d82
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/1d577637-d13b-484b-b872-20ebc41c4d82.pid.haproxy
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 1d577637-d13b-484b-b872-20ebc41c4d82
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:41:25 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:25.501 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82', 'env', 'PROCESS_TAG=haproxy-1d577637-d13b-484b-b872-20ebc41c4d82', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d577637-d13b-484b-b872-20ebc41c4d82.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:25 np0005539551 podman[295767]: 2025-11-29 08:41:25.830745526 +0000 UTC m=+0.040341554 container create bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:41:25 np0005539551 systemd[1]: Started libpod-conmon-bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a.scope.
Nov 29 03:41:25 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:41:25 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d0fa78a1906cf408b84eea72429448b2f9356206e8faba30c2f1e54bfe309a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:25 np0005539551 podman[295767]: 2025-11-29 08:41:25.906112286 +0000 UTC m=+0.115708334 container init bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:41:25 np0005539551 podman[295767]: 2025-11-29 08:41:25.809179461 +0000 UTC m=+0.018775509 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:25 np0005539551 podman[295767]: 2025-11-29 08:41:25.911523183 +0000 UTC m=+0.121119201 container start bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:41:25 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [NOTICE]   (295787) : New worker (295789) forked
Nov 29 03:41:25 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [NOTICE]   (295787) : Loading success.
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.939 227364 DEBUG nova.compute.manager [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.940 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.940 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.940 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.941 227364 DEBUG nova.compute.manager [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Processing event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.941 227364 DEBUG nova.compute.manager [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.941 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.942 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.942 227364 DEBUG oslo_concurrency.lockutils [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.942 227364 DEBUG nova.compute.manager [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] No waiting events found dispatching network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:25 np0005539551 nova_compute[227360]: 2025-11-29 08:41:25.942 227364 WARNING nova.compute.manager [req-3a87b099-06a4-4351-828c-408fad659511 req-5938ff90-5b8f-4a15-9c98-09122731c02e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received unexpected event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.181 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405686.1806426, 695c5eef-14b1-4f3f-8570-d76e4500f9f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.182 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.186 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.190 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.193 227364 INFO nova.virt.libvirt.driver [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Instance spawned successfully.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.194 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.218 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.224 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.228 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.228 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.229 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.229 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.230 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.230 227364 DEBUG nova.virt.libvirt.driver [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.263 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.264 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405686.1808445, 695c5eef-14b1-4f3f-8570-d76e4500f9f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.265 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.292 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.296 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405686.1894755, 695c5eef-14b1-4f3f-8570-d76e4500f9f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.296 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.309 227364 INFO nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Took 6.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.310 227364 DEBUG nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.327 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.329 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.362 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.383 227364 INFO nova.compute.manager [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Took 7.86 seconds to build instance.#033[00m
Nov 29 03:41:26 np0005539551 nova_compute[227360]: 2025-11-29 08:41:26.429 227364 DEBUG oslo_concurrency.lockutils [None req-c45c9bc1-ff8f-43ff-9b4d-f42f3307a844 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:28 np0005539551 nova_compute[227360]: 2025-11-29 08:41:28.018 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:28.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:28.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:29 np0005539551 nova_compute[227360]: 2025-11-29 08:41:29.090 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539551 nova_compute[227360]: 2025-11-29 08:41:29.105 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:30 np0005539551 nova_compute[227360]: 2025-11-29 08:41:30.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:30 np0005539551 podman[295842]: 2025-11-29 08:41:30.60489663 +0000 UTC m=+0.054071165 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 03:41:30 np0005539551 podman[295841]: 2025-11-29 08:41:30.61711164 +0000 UTC m=+0.068476834 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:41:30 np0005539551 nova_compute[227360]: 2025-11-29 08:41:30.638 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539551 podman[295840]: 2025-11-29 08:41:30.671611777 +0000 UTC m=+0.125270884 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:41:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:32.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:32 np0005539551 nova_compute[227360]: 2025-11-29 08:41:32.664 227364 DEBUG nova.compute.manager [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:32 np0005539551 nova_compute[227360]: 2025-11-29 08:41:32.665 227364 DEBUG nova.compute.manager [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing instance network info cache due to event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:32 np0005539551 nova_compute[227360]: 2025-11-29 08:41:32.665 227364 DEBUG oslo_concurrency.lockutils [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:32 np0005539551 nova_compute[227360]: 2025-11-29 08:41:32.665 227364 DEBUG oslo_concurrency.lockutils [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:32 np0005539551 nova_compute[227360]: 2025-11-29 08:41:32.666 227364 DEBUG nova.network.neutron [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:33 np0005539551 nova_compute[227360]: 2025-11-29 08:41:33.950 227364 DEBUG nova.network.neutron [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updated VIF entry in instance network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:33 np0005539551 nova_compute[227360]: 2025-11-29 08:41:33.952 227364 DEBUG nova.network.neutron [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updating instance_info_cache with network_info: [{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:33 np0005539551 nova_compute[227360]: 2025-11-29 08:41:33.982 227364 DEBUG oslo_concurrency.lockutils [req-b25d293a-c898-4d02-9f3d-a13b0ee78558 req-af01c1c3-da77-443e-96c5-fe923f929533 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:34 np0005539551 nova_compute[227360]: 2025-11-29 08:41:34.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539551 nova_compute[227360]: 2025-11-29 08:41:34.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:41:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:41:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:34.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:34 np0005539551 nova_compute[227360]: 2025-11-29 08:41:34.539 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:36 np0005539551 nova_compute[227360]: 2025-11-29 08:41:36.979 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405681.9748967, fc75971c-cf7e-4383-81cf-81c801f67489 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:36 np0005539551 nova_compute[227360]: 2025-11-29 08:41:36.979 227364 INFO nova.compute.manager [-] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:37 np0005539551 nova_compute[227360]: 2025-11-29 08:41:37.002 227364 DEBUG nova.compute.manager [None req-4ee14ac5-186c-4271-bbce-b98a7ad35716 - - - - - -] [instance: fc75971c-cf7e-4383-81cf-81c801f67489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 03:41:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:38 np0005539551 nova_compute[227360]: 2025-11-29 08:41:38.835 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:39 np0005539551 nova_compute[227360]: 2025-11-29 08:41:39.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:39 np0005539551 nova_compute[227360]: 2025-11-29 08:41:39.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:40Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:15:80 10.100.0.9
Nov 29 03:41:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:40Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:15:80 10.100.0.9
Nov 29 03:41:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:40.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:41Z|00851|binding|INFO|Releasing lport 5d130a64-533c-41d2-b532-7fb647957491 from this chassis (sb_readonly=0)
Nov 29 03:41:41 np0005539551 nova_compute[227360]: 2025-11-29 08:41:41.862 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:44 np0005539551 nova_compute[227360]: 2025-11-29 08:41:44.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:44.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:44 np0005539551 nova_compute[227360]: 2025-11-29 08:41:44.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 03:41:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:46.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:46.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:47.399 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:47.401 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.433 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.433 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:47Z|00852|binding|INFO|Releasing lport 5d130a64-533c-41d2-b532-7fb647957491 from this chassis (sb_readonly=0)
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.595 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2287058055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:47 np0005539551 nova_compute[227360]: 2025-11-29 08:41:47.903 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.020 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.021 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.160 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.161 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4067MB free_disk=20.88199234008789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.161 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.162 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.328 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 695c5eef-14b1-4f3f-8570-d76e4500f9f0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.329 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.329 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:41:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:48.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.566 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.605 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.606 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.626 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.674 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:41:48 np0005539551 nova_compute[227360]: 2025-11-29 08:41:48.756 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.115 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/985564955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.189 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.194 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.228 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.263 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:41:49 np0005539551 nova_compute[227360]: 2025-11-29 08:41:49.263 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:49.402 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:50.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:52.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:52.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.037 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.038 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.039 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.039 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.040 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.041 227364 INFO nova.compute.manager [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Terminating instance#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.042 227364 DEBUG nova.compute.manager [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.112 227364 DEBUG nova.compute.manager [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.112 227364 DEBUG nova.compute.manager [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing instance network info cache due to event network-changed-c597a56c-4986-4a01-80a3-e89d85e8cc17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.112 227364 DEBUG oslo_concurrency.lockutils [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.113 227364 DEBUG oslo_concurrency.lockutils [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.113 227364 DEBUG nova.network.neutron [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Refreshing network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:53 np0005539551 kernel: tapc597a56c-49 (unregistering): left promiscuous mode
Nov 29 03:41:53 np0005539551 NetworkManager[48922]: <info>  [1764405713.1757] device (tapc597a56c-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:53Z|00853|binding|INFO|Releasing lport c597a56c-4986-4a01-80a3-e89d85e8cc17 from this chassis (sb_readonly=0)
Nov 29 03:41:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:53Z|00854|binding|INFO|Setting lport c597a56c-4986-4a01-80a3-e89d85e8cc17 down in Southbound
Nov 29 03:41:53 np0005539551 ovn_controller[130266]: 2025-11-29T08:41:53Z|00855|binding|INFO|Removing iface tapc597a56c-49 ovn-installed in OVS
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.242 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.247 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:15:80 10.100.0.9'], port_security=['fa:16:3e:9f:15:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '695c5eef-14b1-4f3f-8570-d76e4500f9f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d577637-d13b-484b-b872-20ebc41c4d82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f62777789c6f4c779695a4bd13f4a8a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a34062f-60f9-4b66-89e3-ade27f6180be dca03b04-d1bc-480a-870c-93cf2cf468c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9d2d3-fbf2-4315-9f5d-7087800d2580, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=c597a56c-4986-4a01-80a3-e89d85e8cc17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.248 139482 INFO neutron.agent.ovn.metadata.agent [-] Port c597a56c-4986-4a01-80a3-e89d85e8cc17 in datapath 1d577637-d13b-484b-b872-20ebc41c4d82 unbound from our chassis#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.249 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d577637-d13b-484b-b872-20ebc41c4d82, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.251 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b53f8f64-5b2e-4df9-bacc-42af1acd8ea8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.251 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82 namespace which is not needed anymore#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.256 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.260 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.260 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.260 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.260 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.280 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.281 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.281 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:53 np0005539551 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Nov 29 03:41:53 np0005539551 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bf.scope: Consumed 14.933s CPU time.
Nov 29 03:41:53 np0005539551 systemd-machined[190756]: Machine qemu-87-instance-000000bf terminated.
Nov 29 03:41:53 np0005539551 NetworkManager[48922]: <info>  [1764405713.3504] manager: (tapc597a56c-49): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.357 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.368 227364 INFO nova.virt.libvirt.driver [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Instance destroyed successfully.#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.368 227364 DEBUG nova.objects.instance [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lazy-loading 'resources' on Instance uuid 695c5eef-14b1-4f3f-8570-d76e4500f9f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.380 227364 DEBUG nova.virt.libvirt.vif [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1990481903-access_point-596874863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1990481903-ac',id=191,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmjfd2y7sEmTsC5xZ1nT9DN/WCE26pKipEQS2aTf8eym9bYPBuA0UMch6Jpi2x/MkVRsi765a4KMJRED2sKY8omOdDqfn4nQxGsjvpYTZF5jPc5Ny1nzuhZN4SUjckzkw==',key_name='tempest-TestSecurityGroupsBasicOps-960433095',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f62777789c6f4c779695a4bd13f4a8a5',ramdisk_id='',reservation_id='r-2kecd4zi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1990481903',owner_user_name='tempest-TestSecurityGroupsBasicOps-1990481903-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:41:26Z,user_data=None,user_id='a9046bfcf55a4809aaf25edff2e41e75',uuid=695c5eef-14b1-4f3f-8570-d76e4500f9f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.380 227364 DEBUG nova.network.os_vif_util [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converting VIF {"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.381 227364 DEBUG nova.network.os_vif_util [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.381 227364 DEBUG os_vif [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.382 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.383 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc597a56c-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.384 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.390 227364 INFO os_vif [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:15:80,bridge_name='br-int',has_traffic_filtering=True,id=c597a56c-4986-4a01-80a3-e89d85e8cc17,network=Network(1d577637-d13b-484b-b872-20ebc41c4d82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc597a56c-49')#033[00m
Nov 29 03:41:53 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [NOTICE]   (295787) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:53 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [NOTICE]   (295787) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:53 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [WARNING]  (295787) : Exiting Master process...
Nov 29 03:41:53 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [ALERT]    (295787) : Current worker (295789) exited with code 143 (Terminated)
Nov 29 03:41:53 np0005539551 neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82[295783]: [WARNING]  (295787) : All workers exited. Exiting... (0)
Nov 29 03:41:53 np0005539551 systemd[1]: libpod-bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a.scope: Deactivated successfully.
Nov 29 03:41:53 np0005539551 podman[296105]: 2025-11-29 08:41:53.405497916 +0000 UTC m=+0.050151539 container died bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:41:53 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:53 np0005539551 systemd[1]: var-lib-containers-storage-overlay-41d0fa78a1906cf408b84eea72429448b2f9356206e8faba30c2f1e54bfe309a-merged.mount: Deactivated successfully.
Nov 29 03:41:53 np0005539551 podman[296105]: 2025-11-29 08:41:53.459029726 +0000 UTC m=+0.103683349 container cleanup bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:53 np0005539551 systemd[1]: libpod-conmon-bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a.scope: Deactivated successfully.
Nov 29 03:41:53 np0005539551 podman[296162]: 2025-11-29 08:41:53.519679098 +0000 UTC m=+0.038097103 container remove bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.525 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[859b9223-ba65-4352-ba59-c36f57397942]: (4, ('Sat Nov 29 08:41:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82 (bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a)\nbccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a\nSat Nov 29 08:41:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82 (bccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a)\nbccc8459a3d1cf33e44a51002feccc204c1a5eb37a5180616a0ea256b5aeec7a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.526 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48964145-6039-414f-aecd-6894663bdd31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.527 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d577637-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.528 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 kernel: tap1d577637-d0: left promiscuous mode
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.530 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.536 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[717581a5-c406-4ac9-a191-6e54e3fbc843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.545 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.550 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[864331fc-2294-47fe-8cb2-6a22b6830161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.551 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ba43015b-4d49-48c5-b755-5acac4e58b26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.570 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a83e93-5f2d-4863-a668-7adfca783de2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873214, 'reachable_time': 23272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296177, 'error': None, 'target': 'ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.573 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d577637-d13b-484b-b872-20ebc41c4d82 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:53 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:41:53.573 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[90f80eec-a74b-4004-b3ca-cce9eae25f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:53 np0005539551 systemd[1]: run-netns-ovnmeta\x2d1d577637\x2dd13b\x2d484b\x2db872\x2d20ebc41c4d82.mount: Deactivated successfully.
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.627 227364 DEBUG nova.compute.manager [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-unplugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.627 227364 DEBUG oslo_concurrency.lockutils [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.627 227364 DEBUG oslo_concurrency.lockutils [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.628 227364 DEBUG oslo_concurrency.lockutils [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.628 227364 DEBUG nova.compute.manager [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] No waiting events found dispatching network-vif-unplugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.628 227364 DEBUG nova.compute.manager [req-19ef08ba-d384-483b-bdef-042a7a189b7c req-87264d76-51c0-4363-94b3-b552c35fd435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-unplugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.775 227364 INFO nova.virt.libvirt.driver [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Deleting instance files /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0_del#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.776 227364 INFO nova.virt.libvirt.driver [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Deletion of /var/lib/nova/instances/695c5eef-14b1-4f3f-8570-d76e4500f9f0_del complete#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.828 227364 INFO nova.compute.manager [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.829 227364 DEBUG oslo.service.loopingcall [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.830 227364 DEBUG nova.compute.manager [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:53 np0005539551 nova_compute[227360]: 2025-11-29 08:41:53.830 227364 DEBUG nova.network.neutron [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.117 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:54.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.552 227364 DEBUG nova.network.neutron [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updated VIF entry in instance network info cache for port c597a56c-4986-4a01-80a3-e89d85e8cc17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.553 227364 DEBUG nova.network.neutron [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updating instance_info_cache with network_info: [{"id": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "address": "fa:16:3e:9f:15:80", "network": {"id": "1d577637-d13b-484b-b872-20ebc41c4d82", "bridge": "br-int", "label": "tempest-network-smoke--543108712", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f62777789c6f4c779695a4bd13f4a8a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc597a56c-49", "ovs_interfaceid": "c597a56c-4986-4a01-80a3-e89d85e8cc17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.798 227364 DEBUG oslo_concurrency.lockutils [req-3ddf9789-9d52-4860-9826-8ec03792c291 req-4bbf806f-7ba2-4049-ac00-943f42b17519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-695c5eef-14b1-4f3f-8570-d76e4500f9f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.865 227364 DEBUG nova.network.neutron [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.894 227364 INFO nova.compute.manager [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.942 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.943 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:54 np0005539551 nova_compute[227360]: 2025-11-29 08:41:54.991 227364 DEBUG oslo_concurrency.processutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.200 227364 DEBUG nova.compute.manager [req-84070776-c5f0-4fa2-b8d5-1882ccb2ce4d req-f826fd11-355b-4b6b-b952-55bf3c801c76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-deleted-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574278965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.428 227364 DEBUG oslo_concurrency.processutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.434 227364 DEBUG nova.compute.provider_tree [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.459 227364 DEBUG nova.scheduler.client.report [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.485 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.518 227364 INFO nova.scheduler.client.report [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Deleted allocations for instance 695c5eef-14b1-4f3f-8570-d76e4500f9f0#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.594 227364 DEBUG oslo_concurrency.lockutils [None req-8eb1a6d6-f84f-4ac6-8e10-062afb7845f0 a9046bfcf55a4809aaf25edff2e41e75 f62777789c6f4c779695a4bd13f4a8a5 - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.710 227364 DEBUG nova.compute.manager [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.711 227364 DEBUG oslo_concurrency.lockutils [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.711 227364 DEBUG oslo_concurrency.lockutils [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.711 227364 DEBUG oslo_concurrency.lockutils [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "695c5eef-14b1-4f3f-8570-d76e4500f9f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.711 227364 DEBUG nova.compute.manager [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] No waiting events found dispatching network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:55 np0005539551 nova_compute[227360]: 2025-11-29 08:41:55.712 227364 WARNING nova.compute.manager [req-cec240ac-8a97-4db6-ab1a-d17641acd782 req-03534fc2-7eb4-4824-b47f-d106dcdfbd30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Received unexpected event network-vif-plugged-c597a56c-4986-4a01-80a3-e89d85e8cc17 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:41:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:56.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:41:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:58 np0005539551 nova_compute[227360]: 2025-11-29 08:41:58.387 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:58 np0005539551 nova_compute[227360]: 2025-11-29 08:41:58.921 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539551 nova_compute[227360]: 2025-11-29 08:41:59.119 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539551 nova_compute[227360]: 2025-11-29 08:41:59.482 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539551 nova_compute[227360]: 2025-11-29 08:41:59.750 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:00 np0005539551 nova_compute[227360]: 2025-11-29 08:42:00.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:00 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:01 np0005539551 podman[296254]: 2025-11-29 08:42:01.627118564 +0000 UTC m=+0.068372132 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:42:01 np0005539551 podman[296253]: 2025-11-29 08:42:01.64214614 +0000 UTC m=+0.082299469 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:01 np0005539551 podman[296252]: 2025-11-29 08:42:01.661150485 +0000 UTC m=+0.108581371 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:42:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:02.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:02 np0005539551 nova_compute[227360]: 2025-11-29 08:42:02.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:02 np0005539551 nova_compute[227360]: 2025-11-29 08:42:02.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:42:03 np0005539551 nova_compute[227360]: 2025-11-29 08:42:03.390 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:04 np0005539551 nova_compute[227360]: 2025-11-29 08:42:04.150 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:04.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:05 np0005539551 nova_compute[227360]: 2025-11-29 08:42:05.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:08 np0005539551 nova_compute[227360]: 2025-11-29 08:42:08.366 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405713.3659546, 695c5eef-14b1-4f3f-8570-d76e4500f9f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:08 np0005539551 nova_compute[227360]: 2025-11-29 08:42:08.367 227364 INFO nova.compute.manager [-] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:08 np0005539551 nova_compute[227360]: 2025-11-29 08:42:08.391 227364 DEBUG nova.compute.manager [None req-3fb5717c-6aab-46e8-94a7-d292d485ed5e - - - - - -] [instance: 695c5eef-14b1-4f3f-8570-d76e4500f9f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:08 np0005539551 nova_compute[227360]: 2025-11-29 08:42:08.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:09 np0005539551 nova_compute[227360]: 2025-11-29 08:42:09.152 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:10.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:13 np0005539551 nova_compute[227360]: 2025-11-29 08:42:13.451 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539551 nova_compute[227360]: 2025-11-29 08:42:14.154 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:15 np0005539551 nova_compute[227360]: 2025-11-29 08:42:15.615 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:15 np0005539551 nova_compute[227360]: 2025-11-29 08:42:15.615 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:15 np0005539551 nova_compute[227360]: 2025-11-29 08:42:15.635 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:42:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.091 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.091 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.098 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.098 227364 INFO nova.compute.claims [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.300 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:16.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:16.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1367502756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.759 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.766 227364 DEBUG nova.compute.provider_tree [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.782 227364 DEBUG nova.scheduler.client.report [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.805 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.806 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.849 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.850 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.869 227364 INFO nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.892 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:42:16 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.998 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:16.999 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.000 227364 INFO nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Creating image(s)#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.029 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.061 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.092 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.095 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.173 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.174 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.174 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.175 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.201 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.204 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 29a37bfc-c1ee-4237-8204-4facd2648d51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.487 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 29a37bfc-c1ee-4237-8204-4facd2648d51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.547 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] resizing rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.634 227364 DEBUG nova.objects.instance [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'migration_context' on Instance uuid 29a37bfc-c1ee-4237-8204-4facd2648d51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.653 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.654 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Ensure instance console log exists: /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.655 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.655 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.656 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:17 np0005539551 nova_compute[227360]: 2025-11-29 08:42:17.727 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Successfully created port: d9d9d77e-54c8-47e4-819a-765417810458 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:42:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:18.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:18.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.515 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.739 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Successfully updated port: d9d9d77e-54c8-47e4-819a-765417810458 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.794 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.795 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquired lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.795 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.862 227364 DEBUG nova.compute.manager [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-changed-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.862 227364 DEBUG nova.compute.manager [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Refreshing instance network info cache due to event network-changed-d9d9d77e-54c8-47e4-819a-765417810458. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.863 227364 DEBUG oslo_concurrency.lockutils [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:18 np0005539551 nova_compute[227360]: 2025-11-29 08:42:18.969 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.155 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:19.892 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:19.892 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:19.892 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.947 227364 DEBUG nova.network.neutron [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Updating instance_info_cache with network_info: [{"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.964 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Releasing lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.965 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Instance network_info: |[{"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.965 227364 DEBUG oslo_concurrency.lockutils [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.965 227364 DEBUG nova.network.neutron [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Refreshing network info cache for port d9d9d77e-54c8-47e4-819a-765417810458 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.968 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Start _get_guest_xml network_info=[{"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.974 227364 WARNING nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.982 227364 DEBUG nova.virt.libvirt.host [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.983 227364 DEBUG nova.virt.libvirt.host [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.986 227364 DEBUG nova.virt.libvirt.host [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.986 227364 DEBUG nova.virt.libvirt.host [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.987 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.988 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.988 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.988 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.989 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.989 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.989 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.989 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.989 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.990 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.990 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.990 227364 DEBUG nova.virt.hardware [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:42:19 np0005539551 nova_compute[227360]: 2025-11-29 08:42:19.993 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:20.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/239643328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.434 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.463 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.467 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:20 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3865355405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.904 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.906 227364 DEBUG nova.virt.libvirt.vif [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1441426650',display_name='tempest-TestServerMultinode-server-1441426650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1441426650',id=196,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-w91bq6jm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-
1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:16Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=29a37bfc-c1ee-4237-8204-4facd2648d51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.906 227364 DEBUG nova.network.os_vif_util [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.907 227364 DEBUG nova.network.os_vif_util [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.908 227364 DEBUG nova.objects.instance [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'pci_devices' on Instance uuid 29a37bfc-c1ee-4237-8204-4facd2648d51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.938 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <uuid>29a37bfc-c1ee-4237-8204-4facd2648d51</uuid>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <name>instance-000000c4</name>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestServerMultinode-server-1441426650</nova:name>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:42:19</nova:creationTime>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:user uuid="dda7b3867e5c45a7bb78d049103bc095">tempest-TestServerMultinode-1895688433-project-admin</nova:user>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:project uuid="6519f321a4954567ab99a11cc07cc5ac">tempest-TestServerMultinode-1895688433</nova:project>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <nova:port uuid="d9d9d77e-54c8-47e4-819a-765417810458">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="serial">29a37bfc-c1ee-4237-8204-4facd2648d51</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="uuid">29a37bfc-c1ee-4237-8204-4facd2648d51</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/29a37bfc-c1ee-4237-8204-4facd2648d51_disk">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:cd:51:58"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <target dev="tapd9d9d77e-54"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/console.log" append="off"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:42:20 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:42:20 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:42:20 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:42:20 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.940 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Preparing to wait for external event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.940 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.940 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.940 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.941 227364 DEBUG nova.virt.libvirt.vif [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1441426650',display_name='tempest-TestServerMultinode-server-1441426650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1441426650',id=196,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-w91bq6jm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServer
Multinode-1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:16Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=29a37bfc-c1ee-4237-8204-4facd2648d51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.941 227364 DEBUG nova.network.os_vif_util [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.942 227364 DEBUG nova.network.os_vif_util [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.942 227364 DEBUG os_vif [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.943 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.944 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.944 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.947 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.947 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d9d77e-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.947 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9d9d77e-54, col_values=(('external_ids', {'iface-id': 'd9d9d77e-54c8-47e4-819a-765417810458', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:51:58', 'vm-uuid': '29a37bfc-c1ee-4237-8204-4facd2648d51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.949 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539551 NetworkManager[48922]: <info>  [1764405740.9503] manager: (tapd9d9d77e-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.951 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.956 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539551 nova_compute[227360]: 2025-11-29 08:42:20.957 227364 INFO os_vif [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54')#033[00m
Nov 29 03:42:21 np0005539551 nova_compute[227360]: 2025-11-29 08:42:21.041 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:21 np0005539551 nova_compute[227360]: 2025-11-29 08:42:21.042 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:21 np0005539551 nova_compute[227360]: 2025-11-29 08:42:21.042 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No VIF found with MAC fa:16:3e:cd:51:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:42:21 np0005539551 nova_compute[227360]: 2025-11-29 08:42:21.042 227364 INFO nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Using config drive#033[00m
Nov 29 03:42:21 np0005539551 nova_compute[227360]: 2025-11-29 08:42:21.066 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.298 227364 INFO nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Creating config drive at /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.303 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bz4vcu3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:22.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.440 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bz4vcu3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.469 227364 DEBUG nova.storage.rbd_utils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.473 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config 29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.633 227364 DEBUG oslo_concurrency.processutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config 29a37bfc-c1ee-4237-8204-4facd2648d51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.634 227364 INFO nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Deleting local config drive /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51/disk.config because it was imported into RBD.#033[00m
Nov 29 03:42:22 np0005539551 kernel: tapd9d9d77e-54: entered promiscuous mode
Nov 29 03:42:22 np0005539551 NetworkManager[48922]: <info>  [1764405742.6958] manager: (tapd9d9d77e-54): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 03:42:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:22Z|00856|binding|INFO|Claiming lport d9d9d77e-54c8-47e4-819a-765417810458 for this chassis.
Nov 29 03:42:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:22Z|00857|binding|INFO|d9d9d77e-54c8-47e4-819a-765417810458: Claiming fa:16:3e:cd:51:58 10.100.0.5
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.697 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.717 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:51:58 10.100.0.5'], port_security=['fa:16:3e:cd:51:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29a37bfc-c1ee-4237-8204-4facd2648d51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d9d9d77e-54c8-47e4-819a-765417810458) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.719 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d9d9d77e-54c8-47e4-819a-765417810458 in datapath 379eb72a-ec90-4461-897a-adab6a88928f bound to our chassis#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.721 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 379eb72a-ec90-4461-897a-adab6a88928f#033[00m
Nov 29 03:42:22 np0005539551 systemd-machined[190756]: New machine qemu-88-instance-000000c4.
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.736 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5c13c4f8-472c-4596-a568-66d8b2f3b0c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.737 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap379eb72a-e1 in ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.739 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap379eb72a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.739 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[91812d1a-df40-494f-8833-f4cd22d3380b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.740 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19d96291-5c8a-4b33-b215-06d299c27c55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.759 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[93cfe3e2-71f0-41bc-9e7e-d706eea5063c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 systemd[1]: Started Virtual Machine qemu-88-instance-000000c4.
Nov 29 03:42:22 np0005539551 systemd-udevd[296646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.786 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539551 NetworkManager[48922]: <info>  [1764405742.7888] device (tapd9d9d77e-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:22 np0005539551 NetworkManager[48922]: <info>  [1764405742.7896] device (tapd9d9d77e-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.793 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[86d359c8-5431-4491-8e51-84de49cbb872]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:22Z|00858|binding|INFO|Setting lport d9d9d77e-54c8-47e4-819a-765417810458 ovn-installed in OVS
Nov 29 03:42:22 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:22Z|00859|binding|INFO|Setting lport d9d9d77e-54c8-47e4-819a-765417810458 up in Southbound
Nov 29 03:42:22 np0005539551 nova_compute[227360]: 2025-11-29 08:42:22.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.831 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8069150e-124f-4d37-8204-5d6a919d2d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 NetworkManager[48922]: <info>  [1764405742.8383] manager: (tap379eb72a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 03:42:22 np0005539551 systemd-udevd[296650]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.837 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9ec336-6e26-49a3-a15f-5072a91a54ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.880 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[7d81a7c1-c6b3-4f97-9d69-a523fa564c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.882 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc7aa7b-c8ca-4ace-b13c-ad9d7ede6964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 NetworkManager[48922]: <info>  [1764405742.9057] device (tap379eb72a-e0): carrier: link connected
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.911 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[99f877c9-f6c3-4dc9-b271-00af91165a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.929 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a873ff25-dc53-44e8-b2c3-0f63b12b9ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878978, 'reachable_time': 29256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296676, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.942 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a1df389d-1d52-417a-bc3d-5187db53305b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:585f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878978, 'tstamp': 878978}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296677, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.960 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3594d614-a326-48ae-b17e-56a0ccd1aa8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878978, 'reachable_time': 29256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296685, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:22 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:22.998 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[32d49abf-d7c8-482a-9222-38e6d7f00448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.006 227364 DEBUG nova.network.neutron [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Updated VIF entry in instance network info cache for port d9d9d77e-54c8-47e4-819a-765417810458. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.007 227364 DEBUG nova.network.neutron [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Updating instance_info_cache with network_info: [{"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.030 227364 DEBUG oslo_concurrency.lockutils [req-d44c5e93-ac69-4996-8ae7-ab6e3274ff34 req-3c4449e3-1bec-4958-87a0-b201168ad6f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-29a37bfc-c1ee-4237-8204-4facd2648d51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.059 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c7998fe3-8e93-439d-ae60-c659ad289ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.060 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.060 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.060 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap379eb72a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.097 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539551 NetworkManager[48922]: <info>  [1764405743.0980] manager: (tap379eb72a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 03:42:23 np0005539551 kernel: tap379eb72a-e0: entered promiscuous mode
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.098 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.100 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap379eb72a-e0, col_values=(('external_ids', {'iface-id': '788af420-8eef-4a56-95d5-80ebf4f9f71c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.101 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:23Z|00860|binding|INFO|Releasing lport 788af420-8eef-4a56-95d5-80ebf4f9f71c from this chassis (sb_readonly=0)
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.110 227364 DEBUG nova.compute.manager [req-009f2170-1d8a-4633-be89-e14eb0345d83 req-59fde31f-9a1e-4ad7-b6aa-2e49efd5a4b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.111 227364 DEBUG oslo_concurrency.lockutils [req-009f2170-1d8a-4633-be89-e14eb0345d83 req-59fde31f-9a1e-4ad7-b6aa-2e49efd5a4b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.111 227364 DEBUG oslo_concurrency.lockutils [req-009f2170-1d8a-4633-be89-e14eb0345d83 req-59fde31f-9a1e-4ad7-b6aa-2e49efd5a4b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.112 227364 DEBUG oslo_concurrency.lockutils [req-009f2170-1d8a-4633-be89-e14eb0345d83 req-59fde31f-9a1e-4ad7-b6aa-2e49efd5a4b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.112 227364 DEBUG nova.compute.manager [req-009f2170-1d8a-4633-be89-e14eb0345d83 req-59fde31f-9a1e-4ad7-b6aa-2e49efd5a4b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Processing event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.115 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.115 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.116 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7953c24d-bd48-4033-9214-a402cc375366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.117 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:23.117 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'env', 'PROCESS_TAG=haproxy-379eb72a-ec90-4461-897a-adab6a88928f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/379eb72a-ec90-4461-897a-adab6a88928f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.148 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405743.1475658, 29a37bfc-c1ee-4237-8204-4facd2648d51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.148 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.151 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.154 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.158 227364 INFO nova.virt.libvirt.driver [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Instance spawned successfully.#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.158 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.183 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.189 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.194 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.194 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.195 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.195 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.196 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.197 227364 DEBUG nova.virt.libvirt.driver [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.222 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.222 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405743.1485298, 29a37bfc-c1ee-4237-8204-4facd2648d51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.223 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.257 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.260 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405743.1534681, 29a37bfc-c1ee-4237-8204-4facd2648d51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.261 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.288 227364 INFO nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Took 6.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.289 227364 DEBUG nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.291 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.297 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.330 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.357 227364 INFO nova.compute.manager [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Took 7.64 seconds to build instance.#033[00m
Nov 29 03:42:23 np0005539551 nova_compute[227360]: 2025-11-29 08:42:23.372 227364 DEBUG oslo_concurrency.lockutils [None req-d14d706f-6332-4f93-b9a6-0836e71d7af6 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:23 np0005539551 podman[296752]: 2025-11-29 08:42:23.476443206 +0000 UTC m=+0.053518580 container create 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:42:23 np0005539551 systemd[1]: Started libpod-conmon-073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258.scope.
Nov 29 03:42:23 np0005539551 podman[296752]: 2025-11-29 08:42:23.449037494 +0000 UTC m=+0.026112888 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:23 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:42:23 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933f3bfc7ad3fbcd3ae1e7051825e83c5fbdf219df1b197f6686abf61206dff9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:23 np0005539551 podman[296752]: 2025-11-29 08:42:23.58664456 +0000 UTC m=+0.163719954 container init 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:23 np0005539551 podman[296752]: 2025-11-29 08:42:23.593000802 +0000 UTC m=+0.170076176 container start 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:42:23 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [NOTICE]   (296771) : New worker (296773) forked
Nov 29 03:42:23 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [NOTICE]   (296771) : Loading success.
Nov 29 03:42:24 np0005539551 nova_compute[227360]: 2025-11-29 08:42:24.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.201 227364 DEBUG nova.compute.manager [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.202 227364 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.203 227364 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.203 227364 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.203 227364 DEBUG nova.compute.manager [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] No waiting events found dispatching network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.203 227364 WARNING nova.compute.manager [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received unexpected event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:42:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:25 np0005539551 nova_compute[227360]: 2025-11-29 08:42:25.951 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:26.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:28.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539551 nova_compute[227360]: 2025-11-29 08:42:29.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:30.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:30.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:30 np0005539551 nova_compute[227360]: 2025-11-29 08:42:30.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:32 np0005539551 podman[296784]: 2025-11-29 08:42:32.61757722 +0000 UTC m=+0.063483280 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:42:32 np0005539551 podman[296783]: 2025-11-29 08:42:32.646870633 +0000 UTC m=+0.096789852 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:42:32 np0005539551 podman[296782]: 2025-11-29 08:42:32.650316006 +0000 UTC m=+0.102931358 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:42:34 np0005539551 nova_compute[227360]: 2025-11-29 08:42:34.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:34.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:35 np0005539551 nova_compute[227360]: 2025-11-29 08:42:35.959 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:36.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.806 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.806 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.807 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.807 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.807 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.810 227364 INFO nova.compute.manager [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Terminating instance#033[00m
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.811 227364 DEBUG nova.compute.manager [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:36 np0005539551 kernel: tapd9d9d77e-54 (unregistering): left promiscuous mode
Nov 29 03:42:36 np0005539551 NetworkManager[48922]: <info>  [1764405756.8670] device (tapd9d9d77e-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.875 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:36Z|00861|binding|INFO|Releasing lport d9d9d77e-54c8-47e4-819a-765417810458 from this chassis (sb_readonly=0)
Nov 29 03:42:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:36Z|00862|binding|INFO|Setting lport d9d9d77e-54c8-47e4-819a-765417810458 down in Southbound
Nov 29 03:42:36 np0005539551 ovn_controller[130266]: 2025-11-29T08:42:36Z|00863|binding|INFO|Removing iface tapd9d9d77e-54 ovn-installed in OVS
Nov 29 03:42:36 np0005539551 nova_compute[227360]: 2025-11-29 08:42:36.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:36 np0005539551 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Nov 29 03:42:36 np0005539551 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c4.scope: Consumed 13.058s CPU time.
Nov 29 03:42:36 np0005539551 systemd-machined[190756]: Machine qemu-88-instance-000000c4 terminated.
Nov 29 03:42:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:36.943 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:51:58 10.100.0.5'], port_security=['fa:16:3e:cd:51:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29a37bfc-c1ee-4237-8204-4facd2648d51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=d9d9d77e-54c8-47e4-819a-765417810458) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:36.944 139482 INFO neutron.agent.ovn.metadata.agent [-] Port d9d9d77e-54c8-47e4-819a-765417810458 in datapath 379eb72a-ec90-4461-897a-adab6a88928f unbound from our chassis#033[00m
Nov 29 03:42:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:36.946 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 379eb72a-ec90-4461-897a-adab6a88928f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:36.947 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[707fcb97-de41-4c9b-845a-3e44be2d45ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:36 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:36.947 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace which is not needed anymore#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.044 227364 INFO nova.virt.libvirt.driver [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Instance destroyed successfully.#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.045 227364 DEBUG nova.objects.instance [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'resources' on Instance uuid 29a37bfc-c1ee-4237-8204-4facd2648d51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [NOTICE]   (296771) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [NOTICE]   (296771) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [WARNING]  (296771) : Exiting Master process...
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [WARNING]  (296771) : Exiting Master process...
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [ALERT]    (296771) : Current worker (296773) exited with code 143 (Terminated)
Nov 29 03:42:37 np0005539551 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[296767]: [WARNING]  (296771) : All workers exited. Exiting... (0)
Nov 29 03:42:37 np0005539551 systemd[1]: libpod-073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258.scope: Deactivated successfully.
Nov 29 03:42:37 np0005539551 podman[296867]: 2025-11-29 08:42:37.080637241 +0000 UTC m=+0.047151938 container died 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.081 227364 DEBUG nova.virt.libvirt.vif [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1441426650',display_name='tempest-TestServerMultinode-server-1441426650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1441426650',id=196,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-w91bq6jm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:23Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=29a37bfc-c1ee-4237-8204-4facd2648d51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.082 227364 DEBUG nova.network.os_vif_util [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "d9d9d77e-54c8-47e4-819a-765417810458", "address": "fa:16:3e:cd:51:58", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d9d77e-54", "ovs_interfaceid": "d9d9d77e-54c8-47e4-819a-765417810458", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.082 227364 DEBUG nova.network.os_vif_util [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.083 227364 DEBUG os_vif [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.085 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.085 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d9d77e-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.089 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.092 227364 INFO os_vif [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:51:58,bridge_name='br-int',has_traffic_filtering=True,id=d9d9d77e-54c8-47e4-819a-765417810458,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d9d77e-54')#033[00m
Nov 29 03:42:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-933f3bfc7ad3fbcd3ae1e7051825e83c5fbdf219df1b197f6686abf61206dff9-merged.mount: Deactivated successfully.
Nov 29 03:42:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:37 np0005539551 podman[296867]: 2025-11-29 08:42:37.122380621 +0000 UTC m=+0.088895318 container cleanup 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:42:37 np0005539551 systemd[1]: libpod-conmon-073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258.scope: Deactivated successfully.
Nov 29 03:42:37 np0005539551 podman[296925]: 2025-11-29 08:42:37.182008926 +0000 UTC m=+0.037314062 container remove 073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.190 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2c71cfa6-3ac5-432f-84bc-857577a49cdf]: (4, ('Sat Nov 29 08:42:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258)\n073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258\nSat Nov 29 08:42:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258)\n073b48e548009926bbd61b886493803a1d39b6ae00f65fe01984b39d85078258\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.192 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[482f7f4c-4700-4c2e-974b-694993dda72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.193 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:37 np0005539551 kernel: tap379eb72a-e0: left promiscuous mode
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.208 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.210 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4c5e29-727a-4fa9-a710-1bc1d9d84d5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.236 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6311c531-07ba-4f60-ad3c-9e9903c189a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.238 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8e37de-026f-4e16-b595-4439a8af6251]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.251 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8df275aa-05a1-471a-8ebb-23f47bc6a37f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878970, 'reachable_time': 34087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296940, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2d379eb72a\x2dec90\x2d4461\x2d897a\x2dadab6a88928f.mount: Deactivated successfully.
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.255 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:37.256 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[637ac17f-54e2-40c2-89e9-163c4d93d764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.489 227364 DEBUG nova.compute.manager [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-unplugged-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.489 227364 DEBUG oslo_concurrency.lockutils [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.490 227364 DEBUG oslo_concurrency.lockutils [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.490 227364 DEBUG oslo_concurrency.lockutils [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.490 227364 DEBUG nova.compute.manager [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] No waiting events found dispatching network-vif-unplugged-d9d9d77e-54c8-47e4-819a-765417810458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.490 227364 DEBUG nova.compute.manager [req-5d1763c6-018e-4abb-969b-bb062df4e0e9 req-40d4f530-78b6-451f-8242-12155356dfd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-unplugged-d9d9d77e-54c8-47e4-819a-765417810458 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.570 227364 INFO nova.virt.libvirt.driver [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Deleting instance files /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51_del#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.571 227364 INFO nova.virt.libvirt.driver [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Deletion of /var/lib/nova/instances/29a37bfc-c1ee-4237-8204-4facd2648d51_del complete#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.700 227364 INFO nova.compute.manager [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.700 227364 DEBUG oslo.service.loopingcall [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.701 227364 DEBUG nova.compute.manager [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:37 np0005539551 nova_compute[227360]: 2025-11-29 08:42:37.701 227364 DEBUG nova.network.neutron [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.203 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.660 227364 DEBUG nova.compute.manager [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.661 227364 DEBUG oslo_concurrency.lockutils [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.661 227364 DEBUG oslo_concurrency.lockutils [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.662 227364 DEBUG oslo_concurrency.lockutils [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.662 227364 DEBUG nova.compute.manager [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] No waiting events found dispatching network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.663 227364 WARNING nova.compute.manager [req-a636b5e2-227c-4eac-893b-105bcb0af433 req-dd79d220-6925-4a26-8a25-1c4543a58b65 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received unexpected event network-vif-plugged-d9d9d77e-54c8-47e4-819a-765417810458 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.677 227364 DEBUG nova.network.neutron [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.694 227364 INFO nova.compute.manager [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Took 1.99 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.748 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.748 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:39 np0005539551 nova_compute[227360]: 2025-11-29 08:42:39.815 227364 DEBUG oslo_concurrency.processutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1979459202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.266 227364 DEBUG oslo_concurrency.processutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.271 227364 DEBUG nova.compute.provider_tree [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.293 227364 DEBUG nova.scheduler.client.report [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.319 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.340 227364 INFO nova.scheduler.client.report [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Deleted allocations for instance 29a37bfc-c1ee-4237-8204-4facd2648d51#033[00m
Nov 29 03:42:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:40.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:40 np0005539551 nova_compute[227360]: 2025-11-29 08:42:40.426 227364 DEBUG oslo_concurrency.lockutils [None req-d3882f91-5b1d-427e-bcde-94788332c591 dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "29a37bfc-c1ee-4237-8204-4facd2648d51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:41 np0005539551 nova_compute[227360]: 2025-11-29 08:42:41.809 227364 DEBUG nova.compute.manager [req-52f1b90c-d145-424c-a8d5-4cd4361bd0ec req-72a06d7f-7278-4c86-892f-b2b7ee5763e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Received event network-vif-deleted-d9d9d77e-54c8-47e4-819a-765417810458 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:42 np0005539551 nova_compute[227360]: 2025-11-29 08:42:42.088 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:42.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:44 np0005539551 nova_compute[227360]: 2025-11-29 08:42:44.239 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:44.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:45 np0005539551 nova_compute[227360]: 2025-11-29 08:42:45.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.566790) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765566847, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 260, "total_data_size": 2614333, "memory_usage": 2653232, "flush_reason": "Manual Compaction"}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765577565, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1722866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65048, "largest_seqno": 66390, "table_properties": {"data_size": 1717243, "index_size": 2890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13197, "raw_average_key_size": 20, "raw_value_size": 1705380, "raw_average_value_size": 2599, "num_data_blocks": 127, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405670, "oldest_key_time": 1764405670, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 10821 microseconds, and 4616 cpu microseconds.
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.577618) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1722866 bytes OK
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.577636) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.578858) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.578871) EVENT_LOG_v1 {"time_micros": 1764405765578867, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.578887) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2607884, prev total WAL file size 2607884, number of live WAL files 2.
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.579547) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1682KB)], [132(10MB)]
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765579588, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 13068570, "oldest_snapshot_seqno": -1}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9609 keys, 12919365 bytes, temperature: kUnknown
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765646585, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 12919365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12856760, "index_size": 37468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 254053, "raw_average_key_size": 26, "raw_value_size": 12687302, "raw_average_value_size": 1320, "num_data_blocks": 1427, "num_entries": 9609, "num_filter_entries": 9609, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.646840) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12919365 bytes
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.648271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.8 rd, 192.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 10147, records dropped: 538 output_compression: NoCompression
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.648289) EVENT_LOG_v1 {"time_micros": 1764405765648280, "job": 84, "event": "compaction_finished", "compaction_time_micros": 67080, "compaction_time_cpu_micros": 29338, "output_level": 6, "num_output_files": 1, "total_output_size": 12919365, "num_input_records": 10147, "num_output_records": 9609, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765648787, "job": 84, "event": "table_file_deletion", "file_number": 134}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765651041, "job": 84, "event": "table_file_deletion", "file_number": 132}
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.579489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.651198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.651204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.651207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.651208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:42:45.651210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:45 np0005539551 nova_compute[227360]: 2025-11-29 08:42:45.892 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:45.893 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:45.894 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:42:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:46.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:47 np0005539551 nova_compute[227360]: 2025-11-29 08:42:47.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:42:47.897 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:48 np0005539551 nova_compute[227360]: 2025-11-29 08:42:48.163 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:48.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:48.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.241 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.437 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3592719572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:49 np0005539551 nova_compute[227360]: 2025-11-29 08:42:49.905 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.083 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.084 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4241MB free_disk=20.897598266601562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.084 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.085 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.156 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.157 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.202 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:50.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:50.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3605329810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.666 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.673 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.701 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.727 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:42:50 np0005539551 nova_compute[227360]: 2025-11-29 08:42:50.727 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:52 np0005539551 nova_compute[227360]: 2025-11-29 08:42:52.044 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405757.0418627, 29a37bfc-c1ee-4237-8204-4facd2648d51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:52 np0005539551 nova_compute[227360]: 2025-11-29 08:42:52.044 227364 INFO nova.compute.manager [-] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:52 np0005539551 nova_compute[227360]: 2025-11-29 08:42:52.069 227364 DEBUG nova.compute.manager [None req-79485753-4851-45e7-ab2e-b7aea549751e - - - - - -] [instance: 29a37bfc-c1ee-4237-8204-4facd2648d51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:52 np0005539551 nova_compute[227360]: 2025-11-29 08:42:52.094 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:52.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.723 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.723 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.724 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.724 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.739 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:42:53 np0005539551 nova_compute[227360]: 2025-11-29 08:42:53.740 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:54 np0005539551 nova_compute[227360]: 2025-11-29 08:42:54.243 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:55 np0005539551 nova_compute[227360]: 2025-11-29 08:42:55.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:56.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:57 np0005539551 nova_compute[227360]: 2025-11-29 08:42:57.096 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:58.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:42:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:59 np0005539551 nova_compute[227360]: 2025-11-29 08:42:59.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:00.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:01 np0005539551 nova_compute[227360]: 2025-11-29 08:43:01.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:02 np0005539551 nova_compute[227360]: 2025-11-29 08:43:02.140 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:02.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:02.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:03 np0005539551 podman[297142]: 2025-11-29 08:43:03.611280653 +0000 UTC m=+0.064465846 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:43:03 np0005539551 podman[297143]: 2025-11-29 08:43:03.628418027 +0000 UTC m=+0.081025734 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:43:03 np0005539551 podman[297141]: 2025-11-29 08:43:03.698529685 +0000 UTC m=+0.151642576 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.246 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:04.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:43:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:04.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.597 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.597 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.721 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.936 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.937 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.943 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:04 np0005539551 nova_compute[227360]: 2025-11-29 08:43:04.944 227364 INFO nova.compute.claims [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.305 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2786861133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.729 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.735 227364 DEBUG nova.compute.provider_tree [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.897 227364 DEBUG nova.scheduler.client.report [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.935 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:05 np0005539551 nova_compute[227360]: 2025-11-29 08:43:05.936 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.043 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.043 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.061 227364 INFO nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.107 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.275 227364 DEBUG nova.policy [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.377 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.378 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.379 227364 INFO nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Creating image(s)#033[00m
Nov 29 03:43:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:06.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.401 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.425 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.449 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.453 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.483 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.521 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.522 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.523 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.523 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.550 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:06 np0005539551 nova_compute[227360]: 2025-11-29 08:43:06.553 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d945df67-3e4a-4695-955a-60f6e92cd470_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.142 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.164 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d945df67-3e4a-4695-955a-60f6e92cd470_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.221 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.313 227364 DEBUG nova.objects.instance [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid d945df67-3e4a-4695-955a-60f6e92cd470 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.376 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.377 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Ensure instance console log exists: /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.378 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.378 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:07 np0005539551 nova_compute[227360]: 2025-11-29 08:43:07.378 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:08 np0005539551 nova_compute[227360]: 2025-11-29 08:43:08.059 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Successfully created port: cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:43:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:08.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:08.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.278 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Successfully updated port: cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.301 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.302 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.302 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.487 227364 DEBUG nova.compute.manager [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-changed-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.487 227364 DEBUG nova.compute.manager [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Refreshing instance network info cache due to event network-changed-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.488 227364 DEBUG oslo_concurrency.lockutils [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:09 np0005539551 nova_compute[227360]: 2025-11-29 08:43:09.568 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:43:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:10.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:10.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.874 227364 DEBUG nova.network.neutron [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Updating instance_info_cache with network_info: [{"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.898 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.898 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Instance network_info: |[{"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.898 227364 DEBUG oslo_concurrency.lockutils [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.899 227364 DEBUG nova.network.neutron [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Refreshing network info cache for port cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.901 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Start _get_guest_xml network_info=[{"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.905 227364 WARNING nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.918 227364 DEBUG nova.virt.libvirt.host [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.918 227364 DEBUG nova.virt.libvirt.host [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.922 227364 DEBUG nova.virt.libvirt.host [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.922 227364 DEBUG nova.virt.libvirt.host [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.924 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.924 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.924 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.924 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.925 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.925 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.925 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.925 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.925 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.926 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.926 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.926 227364 DEBUG nova.virt.hardware [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:43:10 np0005539551 nova_compute[227360]: 2025-11-29 08:43:10.928 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1110400259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.336 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.364 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.368 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3283923598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.797 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.798 227364 DEBUG nova.virt.libvirt.vif [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=200,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-itb9254v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:06Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=d945df67-3e4a-4695-955a-60f6e92cd470,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.799 227364 DEBUG nova.network.os_vif_util [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.800 227364 DEBUG nova.network.os_vif_util [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.801 227364 DEBUG nova.objects.instance [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid d945df67-3e4a-4695-955a-60f6e92cd470 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.820 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <uuid>d945df67-3e4a-4695-955a-60f6e92cd470</uuid>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <name>instance-000000c8</name>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726</nova:name>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:43:10</nova:creationTime>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <nova:port uuid="cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="serial">d945df67-3e4a-4695-955a-60f6e92cd470</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="uuid">d945df67-3e4a-4695-955a-60f6e92cd470</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d945df67-3e4a-4695-955a-60f6e92cd470_disk">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/d945df67-3e4a-4695-955a-60f6e92cd470_disk.config">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:8c:20:55"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <target dev="tapcc2cdf28-93"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/console.log" append="off"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:43:11 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:43:11 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:43:11 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:43:11 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.821 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Preparing to wait for external event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.822 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.822 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.823 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.823 227364 DEBUG nova.virt.libvirt.vif [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=200,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-itb9254v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:06Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=d945df67-3e4a-4695-955a-60f6e92cd470,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.823 227364 DEBUG nova.network.os_vif_util [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.824 227364 DEBUG nova.network.os_vif_util [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.824 227364 DEBUG os_vif [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.825 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.826 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.829 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.829 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc2cdf28-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.830 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc2cdf28-93, col_values=(('external_ids', {'iface-id': 'cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:20:55', 'vm-uuid': 'd945df67-3e4a-4695-955a-60f6e92cd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:11 np0005539551 NetworkManager[48922]: <info>  [1764405791.8321] manager: (tapcc2cdf28-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.842 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.844 227364 INFO os_vif [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93')#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.893 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.893 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.893 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:8c:20:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.894 227364 INFO nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Using config drive#033[00m
Nov 29 03:43:11 np0005539551 nova_compute[227360]: 2025-11-29 08:43:11.916 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.254 227364 INFO nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Creating config drive at /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.259 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpisvjueuo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.291 227364 DEBUG nova.network.neutron [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Updated VIF entry in instance network info cache for port cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.292 227364 DEBUG nova.network.neutron [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Updating instance_info_cache with network_info: [{"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.305 227364 DEBUG oslo_concurrency.lockutils [req-3c3e1d86-9171-4a46-afb8-cf34c29659e5 req-c156f4be-0253-4fff-a9f6-9694094402c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d945df67-3e4a-4695-955a-60f6e92cd470" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.396 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpisvjueuo" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.426 227364 DEBUG nova.storage.rbd_utils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image d945df67-3e4a-4695-955a-60f6e92cd470_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.430 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config d945df67-3e4a-4695-955a-60f6e92cd470_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.603 227364 DEBUG oslo_concurrency.processutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config d945df67-3e4a-4695-955a-60f6e92cd470_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.604 227364 INFO nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Deleting local config drive /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470/disk.config because it was imported into RBD.#033[00m
Nov 29 03:43:12 np0005539551 kernel: tapcc2cdf28-93: entered promiscuous mode
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.6591] manager: (tapcc2cdf28-93): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 03:43:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:12Z|00864|binding|INFO|Claiming lport cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 for this chassis.
Nov 29 03:43:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:12Z|00865|binding|INFO|cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02: Claiming fa:16:3e:8c:20:55 10.100.0.14
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.665 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.667 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.672 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.6738] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.6746] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.676 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:20:55 10.100.0.14'], port_security=['fa:16:3e:8c:20:55 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd945df67-3e4a-4695-955a-60f6e92cd470', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e268cee-9d36-413b-9d89-7a6c381c8f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f7c3086-1e59-4178-81ca-2a80fe1fd75b, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.678 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 in datapath e9e8d3c4-db7b-4d58-959e-9279d976835d bound to our chassis#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.679 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e9e8d3c4-db7b-4d58-959e-9279d976835d#033[00m
Nov 29 03:43:12 np0005539551 systemd-udevd[297576]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:12 np0005539551 systemd-machined[190756]: New machine qemu-89-instance-000000c8.
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.692 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[13c53e66-9351-4549-bda8-5e9580510318]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.694 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape9e8d3c4-d1 in ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.695 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape9e8d3c4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.695 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[03c1a788-0b11-47b3-9762-26e27a5d1eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.696 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fb66448e-770e-4ac3-affb-46a307c98e21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.7031] device (tapcc2cdf28-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.7042] device (tapcc2cdf28-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:12 np0005539551 systemd[1]: Started Virtual Machine qemu-89-instance-000000c8.
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.708 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[70b1ce63-fc8b-4ef9-916b-e7302e5a2f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.734 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[da64130c-4c45-4017-b1f2-85552cdc562f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.757 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2746e76e-0eee-4b39-a7b8-d87d9293de14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.7665] manager: (tape9e8d3c4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.765 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0013706f-be10-492d-b59b-cbda0225e929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.806 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[6d30a743-2a51-424e-8aac-aa44600dd9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.810 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac614f-60f4-464c-ac7e-db152404d45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.816 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.818 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.8306] device (tape9e8d3c4-d0): carrier: link connected
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.834 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[055812f1-710e-40d4-a3c3-4b6bf91fe770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.835 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:12Z|00866|binding|INFO|Setting lport cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 ovn-installed in OVS
Nov 29 03:43:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:12Z|00867|binding|INFO|Setting lport cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 up in Southbound
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.846 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.850 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3f0b84-1fc4-43e2-84d9-859f637115b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9e8d3c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:52:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883970, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297609, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.865 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6599df4a-663a-43f0-9114-d3461a40b01f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:5214'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 883970, 'tstamp': 883970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297610, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.883 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6874f1-96a8-45a5-b8bf-003ef57b5142]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9e8d3c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:52:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883970, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297622, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.914 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1b54ca-686b-4187-8ff8-fe9c04a06e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.962 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7ea144-3acf-4712-81f5-539af19a0c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.964 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9e8d3c4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.964 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.964 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9e8d3c4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:12 np0005539551 kernel: tape9e8d3c4-d0: entered promiscuous mode
Nov 29 03:43:12 np0005539551 NetworkManager[48922]: <info>  [1764405792.9669] manager: (tape9e8d3c4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.966 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.970 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape9e8d3c4-d0, col_values=(('external_ids', {'iface-id': '145c794a-351b-44e3-94d7-5ac8d914c828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.970 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:12Z|00868|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:43:12 np0005539551 nova_compute[227360]: 2025-11-29 08:43:12.985 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.987 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.989 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[11cc2084-b1d7-4873-be36-9226eb77f4a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.990 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-e9e8d3c4-db7b-4d58-959e-9279d976835d
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID e9e8d3c4-db7b-4d58-959e-9279d976835d
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:12.991 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'env', 'PROCESS_TAG=haproxy-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e9e8d3c4-db7b-4d58-959e-9279d976835d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.034 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405793.03384, d945df67-3e4a-4695-955a-60f6e92cd470 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.035 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.065 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.069 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405793.0341513, d945df67-3e4a-4695-955a-60f6e92cd470 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.070 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.094 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.097 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.117 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:13 np0005539551 podman[297685]: 2025-11-29 08:43:13.333986155 +0000 UTC m=+0.045617996 container create ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.337 227364 DEBUG nova.compute.manager [req-6a72af8d-2d30-4b5f-8364-0747980c6a8f req-9d07a33d-27bd-480e-9214-7a927c9c7d4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.337 227364 DEBUG oslo_concurrency.lockutils [req-6a72af8d-2d30-4b5f-8364-0747980c6a8f req-9d07a33d-27bd-480e-9214-7a927c9c7d4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.338 227364 DEBUG oslo_concurrency.lockutils [req-6a72af8d-2d30-4b5f-8364-0747980c6a8f req-9d07a33d-27bd-480e-9214-7a927c9c7d4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.338 227364 DEBUG oslo_concurrency.lockutils [req-6a72af8d-2d30-4b5f-8364-0747980c6a8f req-9d07a33d-27bd-480e-9214-7a927c9c7d4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.338 227364 DEBUG nova.compute.manager [req-6a72af8d-2d30-4b5f-8364-0747980c6a8f req-9d07a33d-27bd-480e-9214-7a927c9c7d4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Processing event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.339 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.343 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405793.3432136, d945df67-3e4a-4695-955a-60f6e92cd470 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.344 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.345 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.347 227364 INFO nova.virt.libvirt.driver [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Instance spawned successfully.#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.347 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:13 np0005539551 systemd[1]: Started libpod-conmon-ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9.scope.
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.374 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.378 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.378 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.379 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.380 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.380 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.380 227364 DEBUG nova.virt.libvirt.driver [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.384 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:13 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:43:13 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d85fc02c29457bd2f3995050d529f2f3aecaa08635042a0b3c381a48189bc8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:13 np0005539551 podman[297685]: 2025-11-29 08:43:13.308212598 +0000 UTC m=+0.019844459 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.411 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:13 np0005539551 podman[297685]: 2025-11-29 08:43:13.417018824 +0000 UTC m=+0.128650685 container init ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:43:13 np0005539551 podman[297685]: 2025-11-29 08:43:13.422829191 +0000 UTC m=+0.134461032 container start ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:43:13 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [NOTICE]   (297704) : New worker (297706) forked
Nov 29 03:43:13 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [NOTICE]   (297704) : Loading success.
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.448 227364 INFO nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Took 7.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.448 227364 DEBUG nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.518 227364 INFO nova.compute.manager [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Took 8.62 seconds to build instance.#033[00m
Nov 29 03:43:13 np0005539551 nova_compute[227360]: 2025-11-29 08:43:13.538 227364 DEBUG oslo_concurrency.lockutils [None req-34363f7a-efbd-415a-a2da-1fd9044dc77f de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:14 np0005539551 nova_compute[227360]: 2025-11-29 08:43:14.284 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.022 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.399 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.473 227364 DEBUG nova.compute.manager [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.473 227364 DEBUG oslo_concurrency.lockutils [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.473 227364 DEBUG oslo_concurrency.lockutils [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.473 227364 DEBUG oslo_concurrency.lockutils [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.474 227364 DEBUG nova.compute.manager [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] No waiting events found dispatching network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:15 np0005539551 nova_compute[227360]: 2025-11-29 08:43:15.474 227364 WARNING nova.compute.manager [req-1308254c-daad-4e9e-b833-c804851fa151 req-b47e3508-c5ee-4849-9504-55f9f7d81442 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received unexpected event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:43:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:16.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:16 np0005539551 nova_compute[227360]: 2025-11-29 08:43:16.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:17 np0005539551 nova_compute[227360]: 2025-11-29 08:43:17.245 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:18.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:18 np0005539551 nova_compute[227360]: 2025-11-29 08:43:18.768 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:19 np0005539551 nova_compute[227360]: 2025-11-29 08:43:19.286 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:19.893 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:19.893 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:19.894 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:20 np0005539551 nova_compute[227360]: 2025-11-29 08:43:20.306 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:20.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:20 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:21 np0005539551 nova_compute[227360]: 2025-11-29 08:43:21.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:22.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:24 np0005539551 nova_compute[227360]: 2025-11-29 08:43:24.287 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:24.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:24 np0005539551 nova_compute[227360]: 2025-11-29 08:43:24.635 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:26.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:26 np0005539551 nova_compute[227360]: 2025-11-29 08:43:26.835 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:28Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:20:55 10.100.0.14
Nov 29 03:43:28 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:28Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:20:55 10.100.0.14
Nov 29 03:43:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:28 np0005539551 nova_compute[227360]: 2025-11-29 08:43:28.514 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:29 np0005539551 nova_compute[227360]: 2025-11-29 08:43:29.289 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:31 np0005539551 nova_compute[227360]: 2025-11-29 08:43:31.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.261 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.261 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.262 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.262 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.263 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.264 227364 INFO nova.compute.manager [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Terminating instance#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.265 227364 DEBUG nova.compute.manager [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:34 np0005539551 kernel: tapcc2cdf28-93 (unregistering): left promiscuous mode
Nov 29 03:43:34 np0005539551 NetworkManager[48922]: <info>  [1764405814.3633] device (tapcc2cdf28-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:43:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:34Z|00869|binding|INFO|Releasing lport cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 from this chassis (sb_readonly=0)
Nov 29 03:43:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:34Z|00870|binding|INFO|Setting lport cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 down in Southbound
Nov 29 03:43:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:34Z|00871|binding|INFO|Removing iface tapcc2cdf28-93 ovn-installed in OVS
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.374 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.382 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.394 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:20:55 10.100.0.14'], port_security=['fa:16:3e:8c:20:55 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd945df67-3e4a-4695-955a-60f6e92cd470', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e268cee-9d36-413b-9d89-7a6c381c8f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f7c3086-1e59-4178-81ca-2a80fe1fd75b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.396 139482 INFO neutron.agent.ovn.metadata.agent [-] Port cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 in datapath e9e8d3c4-db7b-4d58-959e-9279d976835d unbound from our chassis
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.397 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9e8d3c4-db7b-4d58-959e-9279d976835d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.399 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5995f177-1014-4866-83f4-c90987127a92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.400 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d namespace which is not needed anymore
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:34 np0005539551 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Nov 29 03:43:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:34 np0005539551 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c8.scope: Consumed 13.646s CPU time.
Nov 29 03:43:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:34 np0005539551 systemd-machined[190756]: Machine qemu-89-instance-000000c8 terminated.
Nov 29 03:43:34 np0005539551 podman[297720]: 2025-11-29 08:43:34.451648515 +0000 UTC m=+0.058029763 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:43:34 np0005539551 podman[297719]: 2025-11-29 08:43:34.46809798 +0000 UTC m=+0.076114902 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 podman[297716]: 2025-11-29 08:43:34.48727877 +0000 UTC m=+0.097705817 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.490 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.499 227364 INFO nova.virt.libvirt.driver [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Instance destroyed successfully.
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.500 227364 DEBUG nova.objects.instance [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid d945df67-3e4a-4695-955a-60f6e92cd470 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:43:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.519 227364 DEBUG nova.virt.libvirt.vif [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-0-956379726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=200,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-itb9254v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:13Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=d945df67-3e4a-4695-955a-60f6e92cd470,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.520 227364 DEBUG nova.network.os_vif_util [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "address": "fa:16:3e:8c:20:55", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc2cdf28-93", "ovs_interfaceid": "cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.520 227364 DEBUG nova.network.os_vif_util [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.521 227364 DEBUG os_vif [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [NOTICE]   (297704) : haproxy version is 2.8.14-c23fe91
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [NOTICE]   (297704) : path to executable is /usr/sbin/haproxy
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [WARNING]  (297704) : Exiting Master process...
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [WARNING]  (297704) : Exiting Master process...
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.524 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.525 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc2cdf28-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [ALERT]    (297704) : Current worker (297706) exited with code 143 (Terminated)
Nov 29 03:43:34 np0005539551 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[297700]: [WARNING]  (297704) : All workers exited. Exiting... (0)
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.526 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 systemd[1]: libpod-ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9.scope: Deactivated successfully.
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.529 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.531 227364 INFO os_vif [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:20:55,bridge_name='br-int',has_traffic_filtering=True,id=cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc2cdf28-93')
Nov 29 03:43:34 np0005539551 podman[297798]: 2025-11-29 08:43:34.534257232 +0000 UTC m=+0.047585070 container died ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:43:34 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:43:34 np0005539551 systemd[1]: var-lib-containers-storage-overlay-e3d85fc02c29457bd2f3995050d529f2f3aecaa08635042a0b3c381a48189bc8-merged.mount: Deactivated successfully.
Nov 29 03:43:34 np0005539551 podman[297798]: 2025-11-29 08:43:34.569554887 +0000 UTC m=+0.082882725 container cleanup ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:43:34 np0005539551 systemd[1]: libpod-conmon-ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9.scope: Deactivated successfully.
Nov 29 03:43:34 np0005539551 podman[297857]: 2025-11-29 08:43:34.632348928 +0000 UTC m=+0.041989198 container remove ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.638 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2a7b4a-e7ca-48ba-a0c7-37b7cd7c5122]: (4, ('Sat Nov 29 08:43:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d (ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9)\nffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9\nSat Nov 29 08:43:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d (ffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9)\nffbc71b24fb67d194671d5f07daac55b556662fd275ca7d2264bcc52034a56d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.639 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e92375e4-060f-474f-aa58-2ee258e8b576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.640 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9e8d3c4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.642 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 kernel: tape9e8d3c4-d0: left promiscuous mode
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.655 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 nova_compute[227360]: 2025-11-29 08:43:34.657 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.659 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[29ceb87d-1395-4b93-8f2c-2887558e8887]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.687 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[a3acd5ca-7490-4319-b7e1-eb07e8763b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.689 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5bb171-bc3c-46ef-bab1-c79087520d1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.707 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9015cb74-f56d-4f6f-8876-5d44bc211559]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883963, 'reachable_time': 23085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297872, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.709 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:43:34 np0005539551 systemd[1]: run-netns-ovnmeta\x2de9e8d3c4\x2ddb7b\x2d4d58\x2d959e\x2d9279d976835d.mount: Deactivated successfully.
Nov 29 03:43:34 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:34.709 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[24d73d74-57a4-46e0-a68c-666257f457bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:43:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.033 227364 INFO nova.virt.libvirt.driver [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Deleting instance files /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470_del#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.034 227364 INFO nova.virt.libvirt.driver [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Deletion of /var/lib/nova/instances/d945df67-3e4a-4695-955a-60f6e92cd470_del complete#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.104 227364 INFO nova.compute.manager [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Took 1.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.104 227364 DEBUG oslo.service.loopingcall [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.104 227364 DEBUG nova.compute.manager [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.105 227364 DEBUG nova.network.neutron [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:43:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:36.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.517 227364 DEBUG nova.compute.manager [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-unplugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.518 227364 DEBUG oslo_concurrency.lockutils [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.518 227364 DEBUG oslo_concurrency.lockutils [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.518 227364 DEBUG oslo_concurrency.lockutils [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.518 227364 DEBUG nova.compute.manager [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] No waiting events found dispatching network-vif-unplugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:36 np0005539551 nova_compute[227360]: 2025-11-29 08:43:36.518 227364 DEBUG nova.compute.manager [req-6e19f426-2d08-4587-ba40-18a6ec168ca4 req-f4650abf-3ec4-4746-bd96-54ca660f7054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-unplugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.165 227364 DEBUG nova.network.neutron [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.186 227364 INFO nova.compute.manager [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.244 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.244 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.276 227364 DEBUG nova.compute.manager [req-00248908-ba46-4580-968d-7a53b3a1c80e req-f175a11b-47f4-4e8f-9710-3af00f2943e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-deleted-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.297 227364 DEBUG oslo_concurrency.processutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/379099793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.751 227364 DEBUG oslo_concurrency.processutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.758 227364 DEBUG nova.compute.provider_tree [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.774 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.775 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.779 227364 DEBUG nova.scheduler.client.report [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.787 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.806 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.828 227364 INFO nova.scheduler.client.report [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance d945df67-3e4a-4695-955a-60f6e92cd470#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.853 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.854 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.859 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.859 227364 INFO nova.compute.claims [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.929 227364 DEBUG oslo_concurrency.lockutils [None req-46f399d1-0d9d-44d4-99c7-be46fa1a7fbe de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:37 np0005539551 nova_compute[227360]: 2025-11-29 08:43:37.984 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/98981597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.411 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.418 227364 DEBUG nova.compute.provider_tree [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.438 227364 DEBUG nova.scheduler.client.report [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.463 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.464 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.537 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.537 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.558 227364 INFO nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.594 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.731 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.732 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.733 227364 INFO nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Creating image(s)#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.761 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.795 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.829 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.833 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.873 227364 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.874 227364 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.874 227364 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.875 227364 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d945df67-3e4a-4695-955a-60f6e92cd470-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.875 227364 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] No waiting events found dispatching network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.875 227364 WARNING nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Received unexpected event network-vif-plugged-cc2cdf28-93f6-4bfe-9ce7-a22427f7cb02 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.910 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.912 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.913 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.913 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.950 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:38 np0005539551 nova_compute[227360]: 2025-11-29 08:43:38.955 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.244 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.339 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] resizing rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.438 227364 DEBUG nova.policy [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ac9cfdaf51b4a5ab874f7e3571f88a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '66ee3b60fb89476383201ba204858d4d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.445 227364 DEBUG nova.objects.instance [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'migration_context' on Instance uuid da14f334-c7fe-428d-b6d2-32c2f4cc4054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.464 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.464 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Ensure instance console log exists: /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.465 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.465 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.465 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:39 np0005539551 nova_compute[227360]: 2025-11-29 08:43:39.526 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:40 np0005539551 nova_compute[227360]: 2025-11-29 08:43:40.894 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Successfully created port: b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.621 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Successfully updated port: b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.635 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.635 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.636 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.708 227364 DEBUG nova.compute.manager [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.708 227364 DEBUG nova.compute.manager [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing instance network info cache due to event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.708 227364 DEBUG oslo_concurrency.lockutils [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:41 np0005539551 nova_compute[227360]: 2025-11-29 08:43:41.868 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:43:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.842 227364 DEBUG nova.network.neutron [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.863 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.863 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Instance network_info: |[{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.864 227364 DEBUG oslo_concurrency.lockutils [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.864 227364 DEBUG nova.network.neutron [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.867 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Start _get_guest_xml network_info=[{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.871 227364 WARNING nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.876 227364 DEBUG nova.virt.libvirt.host [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.877 227364 DEBUG nova.virt.libvirt.host [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.887 227364 DEBUG nova.virt.libvirt.host [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.887 227364 DEBUG nova.virt.libvirt.host [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.889 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.890 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.890 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.891 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.891 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.892 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.892 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.893 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.893 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.894 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.894 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.895 227364 DEBUG nova.virt.hardware [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:43:42 np0005539551 nova_compute[227360]: 2025-11-29 08:43:42.899 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1118480931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.340 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.366 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.370 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3129429880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.808 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.810 227364 DEBUG nova.virt.libvirt.vif [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2038800219',display_name='tempest-TestSnapshotPattern-server-2038800219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2038800219',id=203,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-tq0a14ds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:38Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=da14f334-c7fe-428d-b6d2-32c2f4cc4054,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.810 227364 DEBUG nova.network.os_vif_util [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.811 227364 DEBUG nova.network.os_vif_util [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.812 227364 DEBUG nova.objects.instance [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'pci_devices' on Instance uuid da14f334-c7fe-428d-b6d2-32c2f4cc4054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.832 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <uuid>da14f334-c7fe-428d-b6d2-32c2f4cc4054</uuid>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <name>instance-000000cb</name>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestSnapshotPattern-server-2038800219</nova:name>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:43:42</nova:creationTime>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:user uuid="5ac9cfdaf51b4a5ab874f7e3571f88a0">tempest-TestSnapshotPattern-1740419556-project-member</nova:user>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:project uuid="66ee3b60fb89476383201ba204858d4d">tempest-TestSnapshotPattern-1740419556</nova:project>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <nova:port uuid="b8e3a18a-12a7-40b6-8f74-5679f4c9a9df">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="serial">da14f334-c7fe-428d-b6d2-32c2f4cc4054</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="uuid">da14f334-c7fe-428d-b6d2-32c2f4cc4054</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:97:38:a4"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <target dev="tapb8e3a18a-12"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/console.log" append="off"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:43:43 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:43:43 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:43:43 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:43:43 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.833 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Preparing to wait for external event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.833 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.834 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.834 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.835 227364 DEBUG nova.virt.libvirt.vif [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2038800219',display_name='tempest-TestSnapshotPattern-server-2038800219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2038800219',id=203,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-tq0a14ds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:38Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=da14f334-c7fe-428d-b6d2-32c2f4cc4054,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.835 227364 DEBUG nova.network.os_vif_util [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.836 227364 DEBUG nova.network.os_vif_util [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.836 227364 DEBUG os_vif [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.837 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.837 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.841 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8e3a18a-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.841 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8e3a18a-12, col_values=(('external_ids', {'iface-id': 'b8e3a18a-12a7-40b6-8f74-5679f4c9a9df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:38:a4', 'vm-uuid': 'da14f334-c7fe-428d-b6d2-32c2f4cc4054'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:43 np0005539551 NetworkManager[48922]: <info>  [1764405823.8435] manager: (tapb8e3a18a-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.847 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.848 227364 INFO os_vif [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12')#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.920 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.921 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.921 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No VIF found with MAC fa:16:3e:97:38:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.922 227364 INFO nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Using config drive#033[00m
Nov 29 03:43:43 np0005539551 nova_compute[227360]: 2025-11-29 08:43:43.951 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.570 227364 INFO nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Creating config drive at /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.575 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuaych29j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.608 227364 DEBUG nova.network.neutron [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updated VIF entry in instance network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.609 227364 DEBUG nova.network.neutron [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.628 227364 DEBUG oslo_concurrency.lockutils [req-cc86432f-cad8-4cd6-a5ed-43f030299598 req-61838a23-a14f-4243-9c96-51306a1f3b40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:44 np0005539551 nova_compute[227360]: 2025-11-29 08:43:44.714 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuaych29j" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:45 np0005539551 nova_compute[227360]: 2025-11-29 08:43:45.080 227364 DEBUG nova.storage.rbd_utils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:45 np0005539551 nova_compute[227360]: 2025-11-29 08:43:45.083 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:46 np0005539551 nova_compute[227360]: 2025-11-29 08:43:46.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.066 227364 DEBUG oslo_concurrency.processutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.982s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.066 227364 INFO nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Deleting local config drive /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054/disk.config because it was imported into RBD.#033[00m
Nov 29 03:43:47 np0005539551 kernel: tapb8e3a18a-12: entered promiscuous mode
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.1174] manager: (tapb8e3a18a-12): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.116 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00872|binding|INFO|Claiming lport b8e3a18a-12a7-40b6-8f74-5679f4c9a9df for this chassis.
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00873|binding|INFO|b8e3a18a-12a7-40b6-8f74-5679f4c9a9df: Claiming fa:16:3e:97:38:a4 10.100.0.3
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.125 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:38:a4 10.100.0.3'], port_security=['fa:16:3e:97:38:a4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da14f334-c7fe-428d-b6d2-32c2f4cc4054', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66ee3b60fb89476383201ba204858d4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '291b52bf-18bb-41c3-a977-3e030dbdb988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b579620-af3c-4534-8cbc-29d18f0dd8a7, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.126 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df in datapath c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 bound to our chassis#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.128 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c60162ec-f468-4b5f-bd91-89b0a1cb9fa1#033[00m
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00874|binding|INFO|Setting lport b8e3a18a-12a7-40b6-8f74-5679f4c9a9df ovn-installed in OVS
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00875|binding|INFO|Setting lport b8e3a18a-12a7-40b6-8f74-5679f4c9a9df up in Southbound
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.135 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.137 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.141 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a7fcf3-23d5-4686-a98b-d50059eeb9a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.142 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc60162ec-f1 in ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.144 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc60162ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.144 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[63de5cb9-8fe7-44a9-8140-59887d5844ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.145 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[81f34ba2-8fc8-45a0-be93-c825b806028e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 systemd-udevd[298221]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:47 np0005539551 systemd-machined[190756]: New machine qemu-90-instance-000000cb.
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.157 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[f0da91f4-d06d-488b-bcda-47722e539e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.1605] device (tapb8e3a18a-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.1614] device (tapb8e3a18a-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:47 np0005539551 systemd[1]: Started Virtual Machine qemu-90-instance-000000cb.
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.172 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9a0664-dc9f-41c2-9119-c3c34be70106]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.201 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[564c9dab-8dc2-49fe-bd4e-2f27bd255b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.206 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6b607b-7dc0-47ea-b228-3517d620696d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.2081] manager: (tapc60162ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.240 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6da02f-1d49-4b7d-85b7-7436502c4a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.243 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[a46f6ea5-b702-46f6-bd50-e012d94fdbf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.2663] device (tapc60162ec-f0): carrier: link connected
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.272 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9857ea07-bd86-41f2-8e6b-d147575cedf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.289 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb2d33c-6b80-44fc-97e8-77f64b713f25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc60162ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:33:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 887414, 'reachable_time': 42512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298254, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.306 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[38902c63-3c7a-43b4-8928-fe565cab6b2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:33ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 887414, 'tstamp': 887414}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298255, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.328 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[be9ffcdb-19fe-4b52-8208-121f8a6b23ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc60162ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:33:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 887414, 'reachable_time': 42512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298256, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.361 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7921f7-652c-4a44-991b-e9829f6b3e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.427 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8276c9-329d-44ef-86cc-681168e14bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.429 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc60162ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.430 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.430 227364 DEBUG nova.compute.manager [req-64b0c018-3433-488e-9e07-e9a8716fb2f0 req-9c6566dd-cc41-4aa6-897e-c327ca6ee68b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.430 227364 DEBUG oslo_concurrency.lockutils [req-64b0c018-3433-488e-9e07-e9a8716fb2f0 req-9c6566dd-cc41-4aa6-897e-c327ca6ee68b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.430 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc60162ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.431 227364 DEBUG oslo_concurrency.lockutils [req-64b0c018-3433-488e-9e07-e9a8716fb2f0 req-9c6566dd-cc41-4aa6-897e-c327ca6ee68b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.431 227364 DEBUG oslo_concurrency.lockutils [req-64b0c018-3433-488e-9e07-e9a8716fb2f0 req-9c6566dd-cc41-4aa6-897e-c327ca6ee68b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.431 227364 DEBUG nova.compute.manager [req-64b0c018-3433-488e-9e07-e9a8716fb2f0 req-9c6566dd-cc41-4aa6-897e-c327ca6ee68b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Processing event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:47 np0005539551 kernel: tapc60162ec-f0: entered promiscuous mode
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.432 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 NetworkManager[48922]: <info>  [1764405827.4334] manager: (tapc60162ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.435 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.436 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc60162ec-f0, col_values=(('external_ids', {'iface-id': '1cbe83e7-a1b1-4865-9743-7778d9db9685'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.437 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.440 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.441 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00876|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.442 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca95cec-5507-44b0-aa20-5e5a1f4e4159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.443 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID c60162ec-f468-4b5f-bd91-89b0a1cb9fa1
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:47.444 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'env', 'PROCESS_TAG=haproxy-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00877|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.482 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:47Z|00878|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:43:47 np0005539551 nova_compute[227360]: 2025-11-29 08:43:47.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539551 podman[298288]: 2025-11-29 08:43:47.774149944 +0000 UTC m=+0.021163795 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.744 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405828.743644, da14f334-c7fe-428d-b6d2-32c2f4cc4054 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.745 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.747 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.752 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.756 227364 INFO nova.virt.libvirt.driver [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Instance spawned successfully.#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.756 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.843 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.867 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.867 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.868 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.869 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.869 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.870 227364 DEBUG nova.virt.libvirt.driver [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.893 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.896 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.920 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.921 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405828.7445385, da14f334-c7fe-428d-b6d2-32c2f4cc4054 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.921 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.930 227364 INFO nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Took 10.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.931 227364 DEBUG nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.939 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.943 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405828.7506902, da14f334-c7fe-428d-b6d2-32c2f4cc4054 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.944 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.960 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.963 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:48 np0005539551 nova_compute[227360]: 2025-11-29 08:43:48.990 227364 INFO nova.compute.manager [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Took 11.15 seconds to build instance.#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.002 227364 DEBUG oslo_concurrency.lockutils [None req-20d0a92b-d699-4904-8b9b-089e9f49c51f 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.326 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.440 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.440 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.498 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405814.4976962, d945df67-3e4a-4695-955a-60f6e92cd470 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.499 227364 INFO nova.compute.manager [-] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.521 227364 DEBUG nova.compute.manager [None req-21d9803e-6eb0-4067-b994-be3fc9039726 - - - - - -] [instance: d945df67-3e4a-4695-955a-60f6e92cd470] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.563 227364 DEBUG nova.compute.manager [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.564 227364 DEBUG oslo_concurrency.lockutils [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.564 227364 DEBUG oslo_concurrency.lockutils [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.565 227364 DEBUG oslo_concurrency.lockutils [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.565 227364 DEBUG nova.compute.manager [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] No waiting events found dispatching network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:49 np0005539551 nova_compute[227360]: 2025-11-29 08:43:49.565 227364 WARNING nova.compute.manager [req-0b3476ff-90aa-40bd-841a-537553c2a73e req-5a547b04-a552-4a45-894a-359f84ef2231 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received unexpected event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df for instance with vm_state active and task_state None.#033[00m
Nov 29 03:43:50 np0005539551 podman[298288]: 2025-11-29 08:43:50.022977133 +0000 UTC m=+2.269990964 container create 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:43:50 np0005539551 systemd[1]: Started libpod-conmon-28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b.scope.
Nov 29 03:43:50 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:43:50 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c60d04f9d14a6eb06b6dce12ef890d3b147b13ea6ccf8a507c604e40b382c3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:50 np0005539551 podman[298288]: 2025-11-29 08:43:50.138636694 +0000 UTC m=+2.385650545 container init 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:50 np0005539551 podman[298288]: 2025-11-29 08:43:50.146128237 +0000 UTC m=+2.393142068 container start 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:43:50 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [NOTICE]   (298369) : New worker (298371) forked
Nov 29 03:43:50 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [NOTICE]   (298369) : Loading success.
Nov 29 03:43:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/759351650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.355 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.445 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.447 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.628 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.629 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4168MB free_disk=20.896785736083984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.629 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.629 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.704 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance da14f334-c7fe-428d-b6d2-32c2f4cc4054 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.704 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.704 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:43:50 np0005539551 nova_compute[227360]: 2025-11-29 08:43:50.745 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2665335332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.210 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.215 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.240 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.282 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.282 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.297 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:51.298 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:51.299 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:43:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:51 np0005539551 nova_compute[227360]: 2025-11-29 08:43:51.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539551 NetworkManager[48922]: <info>  [1764405831.8160] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 03:43:51 np0005539551 NetworkManager[48922]: <info>  [1764405831.8171] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.049 227364 DEBUG nova.compute.manager [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.050 227364 DEBUG nova.compute.manager [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing instance network info cache due to event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.050 227364 DEBUG oslo_concurrency.lockutils [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.051 227364 DEBUG oslo_concurrency.lockutils [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.051 227364 DEBUG nova.network.neutron [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.052 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:52 np0005539551 ovn_controller[130266]: 2025-11-29T08:43:52Z|00879|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:43:52 np0005539551 nova_compute[227360]: 2025-11-29 08:43:52.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:53 np0005539551 nova_compute[227360]: 2025-11-29 08:43:53.846 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:53 np0005539551 nova_compute[227360]: 2025-11-29 08:43:53.973 227364 DEBUG nova.network.neutron [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updated VIF entry in instance network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:43:53 np0005539551 nova_compute[227360]: 2025-11-29 08:43:53.974 227364 DEBUG nova.network.neutron [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:43:53 np0005539551 nova_compute[227360]: 2025-11-29 08:43:53.995 227364 DEBUG oslo_concurrency.lockutils [req-f9fed948-4962-4061-9a10-74f3f7f7d895 req-31e20716-d26e-4d88-b9c7-084f4d685ece 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.283 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.284 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.284 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.284 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.330 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.449 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.449 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:43:54 np0005539551 nova_compute[227360]: 2025-11-29 08:43:54.449 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid da14f334-c7fe-428d-b6d2-32c2f4cc4054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:43:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539551 nova_compute[227360]: 2025-11-29 08:43:55.673 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:56 np0005539551 nova_compute[227360]: 2025-11-29 08:43:56.369 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:43:56 np0005539551 nova_compute[227360]: 2025-11-29 08:43:56.386 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:43:56 np0005539551 nova_compute[227360]: 2025-11-29 08:43:56.387 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:43:56 np0005539551 nova_compute[227360]: 2025-11-29 08:43:56.388 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:56 np0005539551 nova_compute[227360]: 2025-11-29 08:43:56.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:56.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:43:57.300 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:43:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:58.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:43:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:58 np0005539551 nova_compute[227360]: 2025-11-29 08:43:58.905 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:59 np0005539551 nova_compute[227360]: 2025-11-29 08:43:59.332 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:59 np0005539551 nova_compute[227360]: 2025-11-29 08:43:59.935 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:43:59 np0005539551 nova_compute[227360]: 2025-11-29 08:43:59.936 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:43:59 np0005539551 nova_compute[227360]: 2025-11-29 08:43:59.954 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.045 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.045 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.053 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.053 227364 INFO nova.compute.claims [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Claim successful on node compute-1.ctlplane.example.com
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.195 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:44:00 np0005539551 nova_compute[227360]: 2025-11-29 08:44:00.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1839949680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.139 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.944s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.146 227364 DEBUG nova.compute.provider_tree [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.178 227364 DEBUG nova.scheduler.client.report [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.199 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.200 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.250 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.251 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.277 227364 INFO nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.299 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.436 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.438 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.438 227364 INFO nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Creating image(s)
Nov 29 03:44:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.837 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.864 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.890 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.894 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.926 227364 DEBUG nova.policy [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.961 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.962 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.963 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:01 np0005539551 nova_compute[227360]: 2025-11-29 08:44:01.963 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:02.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:02 np0005539551 nova_compute[227360]: 2025-11-29 08:44:02.669 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:44:02 np0005539551 nova_compute[227360]: 2025-11-29 08:44:02.673 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:44:02 np0005539551 nova_compute[227360]: 2025-11-29 08:44:02.715 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:03 np0005539551 nova_compute[227360]: 2025-11-29 08:44:03.908 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:04 np0005539551 nova_compute[227360]: 2025-11-29 08:44:04.333 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:04.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:04 np0005539551 nova_compute[227360]: 2025-11-29 08:44:04.546 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Successfully created port: 365d4d20-fe0e-400e-9546-ae7e92d68355 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:44:04 np0005539551 podman[298513]: 2025-11-29 08:44:04.611166482 +0000 UTC m=+0.055558116 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:44:04 np0005539551 podman[298512]: 2025-11-29 08:44:04.630131616 +0000 UTC m=+0.071225290 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:44:04 np0005539551 podman[298511]: 2025-11-29 08:44:04.638315037 +0000 UTC m=+0.088850017 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:44:05 np0005539551 nova_compute[227360]: 2025-11-29 08:44:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:05 np0005539551 nova_compute[227360]: 2025-11-29 08:44:05.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.638 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Successfully updated port: 365d4d20-fe0e-400e-9546-ae7e92d68355 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.671 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.671 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.671 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:44:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.826 227364 DEBUG nova.compute.manager [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.826 227364 DEBUG nova.compute.manager [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing instance network info cache due to event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:06 np0005539551 nova_compute[227360]: 2025-11-29 08:44:06.826 227364 DEBUG oslo_concurrency.lockutils [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.073 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.854 227364 DEBUG nova.network.neutron [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updating instance_info_cache with network_info: [{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.882 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.883 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Instance network_info: |[{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.883 227364 DEBUG oslo_concurrency.lockutils [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:07 np0005539551 nova_compute[227360]: 2025-11-29 08:44:07.884 227364 DEBUG nova.network.neutron [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:08.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:08 np0005539551 nova_compute[227360]: 2025-11-29 08:44:08.911 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:09 np0005539551 nova_compute[227360]: 2025-11-29 08:44:09.334 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:09 np0005539551 nova_compute[227360]: 2025-11-29 08:44:09.391 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:09 np0005539551 nova_compute[227360]: 2025-11-29 08:44:09.459 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:44:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:09Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:38:a4 10.100.0.3
Nov 29 03:44:09 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:09Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:38:a4 10.100.0.3
Nov 29 03:44:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:10.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:10.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:10 np0005539551 nova_compute[227360]: 2025-11-29 08:44:10.812 227364 DEBUG nova.network.neutron [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updated VIF entry in instance network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:10 np0005539551 nova_compute[227360]: 2025-11-29 08:44:10.814 227364 DEBUG nova.network.neutron [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updating instance_info_cache with network_info: [{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:10 np0005539551 nova_compute[227360]: 2025-11-29 08:44:10.843 227364 DEBUG oslo_concurrency.lockutils [req-35610cfa-0449-4e4f-a1be-61a9db3c3e11 req-34257e26-9fcb-4ab0-b8f6-b3f3770ec6d3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.694 227364 DEBUG nova.objects.instance [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 549af500-fc44-45ed-a35d-c215ef3e4d5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.715 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.716 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Ensure instance console log exists: /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.716 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.716 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.717 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.718 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Start _get_guest_xml network_info=[{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.722 227364 WARNING nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.731 227364 DEBUG nova.virt.libvirt.host [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.732 227364 DEBUG nova.virt.libvirt.host [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.738 227364 DEBUG nova.virt.libvirt.host [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.739 227364 DEBUG nova.virt.libvirt.host [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.740 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.740 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.741 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.741 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.741 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.742 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.742 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.742 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.742 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.743 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.743 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.743 227364 DEBUG nova.virt.hardware [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:44:11 np0005539551 nova_compute[227360]: 2025-11-29 08:44:11.745 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194437333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.149 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.174 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.177 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:44:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3965362341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.638 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.640 227364 DEBUG nova.virt.libvirt.vif [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=205,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8Un+xYKSpvPA5DktAyDa6z/asyQ7UxXLrzPLNr4ETJXNVwb3MZ6P5tcTBT3ZcirPqCAKZ+Y2o6ziqz6g69ldHrHH2XDMkI5BNZQk0ea7y/UzgrihO5IQPTj0spY6QEcg==',key_name='tempest-TestSecurityGroupsBasicOps-240470012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-s7dkbslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:01Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=549af500-fc44-45ed-a35d-c215ef3e4d5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.640 227364 DEBUG nova.network.os_vif_util [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.641 227364 DEBUG nova.network.os_vif_util [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.642 227364 DEBUG nova.objects.instance [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 549af500-fc44-45ed-a35d-c215ef3e4d5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.675 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <uuid>549af500-fc44-45ed-a35d-c215ef3e4d5f</uuid>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <name>instance-000000cd</name>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356</nova:name>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:44:11</nova:creationTime>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <nova:port uuid="365d4d20-fe0e-400e-9546-ae7e92d68355">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="serial">549af500-fc44-45ed-a35d-c215ef3e4d5f</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="uuid">549af500-fc44-45ed-a35d-c215ef3e4d5f</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/549af500-fc44-45ed-a35d-c215ef3e4d5f_disk">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:ae:3c:05"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <target dev="tap365d4d20-fe"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/console.log" append="off"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:44:12 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:44:12 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:44:12 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:44:12 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.677 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Preparing to wait for external event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.677 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.677 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.678 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.678 227364 DEBUG nova.virt.libvirt.vif [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=205,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8Un+xYKSpvPA5DktAyDa6z/asyQ7UxXLrzPLNr4ETJXNVwb3MZ6P5tcTBT3ZcirPqCAKZ+Y2o6ziqz6g69ldHrHH2XDMkI5BNZQk0ea7y/UzgrihO5IQPTj0spY6QEcg==',key_name='tempest-TestSecurityGroupsBasicOps-240470012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-s7dkbslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:01Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=549af500-fc44-45ed-a35d-c215ef3e4d5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.679 227364 DEBUG nova.network.os_vif_util [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.679 227364 DEBUG nova.network.os_vif_util [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.679 227364 DEBUG os_vif [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.680 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.681 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.683 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap365d4d20-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.684 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap365d4d20-fe, col_values=(('external_ids', {'iface-id': '365d4d20-fe0e-400e-9546-ae7e92d68355', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:3c:05', 'vm-uuid': '549af500-fc44-45ed-a35d-c215ef3e4d5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.685 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:12 np0005539551 NetworkManager[48922]: <info>  [1764405852.6867] manager: (tap365d4d20-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.691 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.692 227364 INFO os_vif [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe')#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.793 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.794 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.794 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:ae:3c:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.794 227364 INFO nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Using config drive#033[00m
Nov 29 03:44:12 np0005539551 nova_compute[227360]: 2025-11-29 08:44:12.819 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:13 np0005539551 nova_compute[227360]: 2025-11-29 08:44:13.523 227364 INFO nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Creating config drive at /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config#033[00m
Nov 29 03:44:13 np0005539551 nova_compute[227360]: 2025-11-29 08:44:13.529 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpducmdixz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:13 np0005539551 nova_compute[227360]: 2025-11-29 08:44:13.681 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpducmdixz" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.250 227364 DEBUG nova.storage.rbd_utils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.256 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.336 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.872 227364 DEBUG oslo_concurrency.processutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config 549af500-fc44-45ed-a35d-c215ef3e4d5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.873 227364 INFO nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Deleting local config drive /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:44:14 np0005539551 kernel: tap365d4d20-fe: entered promiscuous mode
Nov 29 03:44:14 np0005539551 NetworkManager[48922]: <info>  [1764405854.9336] manager: (tap365d4d20-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 03:44:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:14Z|00880|binding|INFO|Claiming lport 365d4d20-fe0e-400e-9546-ae7e92d68355 for this chassis.
Nov 29 03:44:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:14Z|00881|binding|INFO|365d4d20-fe0e-400e-9546-ae7e92d68355: Claiming fa:16:3e:ae:3c:05 10.100.0.5
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.942 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3c:05 10.100.0.5'], port_security=['fa:16:3e:ae:3c:05 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '549af500-fc44-45ed-a35d-c215ef3e4d5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-206c8741-77b9-46d9-99b0-d93958188a98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c9fbbd1-26e1-4977-8e8c-89dbf746498f e4a70837-9964-47ac-95c0-dc348160aa14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b205763b-d396-48de-89c3-9188104d7f46, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=365d4d20-fe0e-400e-9546-ae7e92d68355) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.943 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 365d4d20-fe0e-400e-9546-ae7e92d68355 in datapath 206c8741-77b9-46d9-99b0-d93958188a98 bound to our chassis#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.945 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 206c8741-77b9-46d9-99b0-d93958188a98#033[00m
Nov 29 03:44:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:14Z|00882|binding|INFO|Setting lport 365d4d20-fe0e-400e-9546-ae7e92d68355 ovn-installed in OVS
Nov 29 03:44:14 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:14Z|00883|binding|INFO|Setting lport 365d4d20-fe0e-400e-9546-ae7e92d68355 up in Southbound
Nov 29 03:44:14 np0005539551 nova_compute[227360]: 2025-11-29 08:44:14.954 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.962 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0ae73d-1be7-4c0e-b69d-51c1a62e9587]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.962 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap206c8741-71 in ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.964 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap206c8741-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.965 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5af3ff8-2e67-48d5-b350-d277519f155a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.966 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfdc66f-766e-4d28-a2bb-deac335a175d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:14 np0005539551 systemd-udevd[299042]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.976 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[1347e6c8-9c1a-4d4f-87e4-1c0f4aae64a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:14 np0005539551 systemd-machined[190756]: New machine qemu-91-instance-000000cd.
Nov 29 03:44:14 np0005539551 NetworkManager[48922]: <info>  [1764405854.9862] device (tap365d4d20-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:44:14 np0005539551 NetworkManager[48922]: <info>  [1764405854.9871] device (tap365d4d20-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:44:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:14.993 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4405d5ed-1090-4d1b-9ef2-7d2f2caec968]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:14 np0005539551 systemd[1]: Started Virtual Machine qemu-91-instance-000000cd.
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.023 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[92d85a21-8288-4a01-b2a8-f035cc7e39e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 NetworkManager[48922]: <info>  [1764405855.0306] manager: (tap206c8741-70): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.029 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b85f2b2e-41ca-4136-8739-0294535b22d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.063 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[f863d37d-c3c6-4523-9e71-991cf34d5317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.066 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[50a797e3-db90-4323-97d8-204623b28e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 NetworkManager[48922]: <info>  [1764405855.0879] device (tap206c8741-70): carrier: link connected
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.094 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[be89c060-1a37-49ab-8bb6-7a683302122c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.113 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3059a416-2ec3-4e88-8208-ac3f6f347588]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap206c8741-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0c:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890196, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299075, 'error': None, 'target': 'ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.132 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ca36d302-c2e5-40ae-8664-88bbb67cdccb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:c5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890196, 'tstamp': 890196}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299076, 'error': None, 'target': 'ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.149 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[03205711-b258-422b-9f07-5ee8711ee830]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap206c8741-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0c:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890196, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299077, 'error': None, 'target': 'ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.186 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[96a7023f-a281-4312-9eab-78986019f78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.254 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb56adc-feda-4ab6-9dc0-a5dffad1dfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.256 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap206c8741-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.256 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.256 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap206c8741-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.269 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539551 NetworkManager[48922]: <info>  [1764405855.2701] manager: (tap206c8741-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 03:44:15 np0005539551 kernel: tap206c8741-70: entered promiscuous mode
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.271 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.273 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap206c8741-70, col_values=(('external_ids', {'iface-id': '0b2e69f1-bd6a-48ea-a63b-1e2cb8e85ada'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.274 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:15Z|00884|binding|INFO|Releasing lport 0b2e69f1-bd6a-48ea-a63b-1e2cb8e85ada from this chassis (sb_readonly=0)
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.289 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/206c8741-77b9-46d9-99b0-d93958188a98.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/206c8741-77b9-46d9-99b0-d93958188a98.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.290 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9690e96d-dc16-40be-b761-1fcdd049c9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.291 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-206c8741-77b9-46d9-99b0-d93958188a98
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/206c8741-77b9-46d9-99b0-d93958188a98.pid.haproxy
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 206c8741-77b9-46d9-99b0-d93958188a98
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:44:15 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:15.291 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98', 'env', 'PROCESS_TAG=haproxy-206c8741-77b9-46d9-99b0-d93958188a98', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/206c8741-77b9-46d9-99b0-d93958188a98.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.336 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405855.3357048, 549af500-fc44-45ed-a35d-c215ef3e4d5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.337 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.355 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.359 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405855.3362277, 549af500-fc44-45ed-a35d-c215ef3e4d5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.359 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.375 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.378 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.394 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.507 227364 DEBUG nova.compute.manager [req-ebfd3706-815f-4b36-8a11-16241d84c921 req-1a9febec-9bb5-47c1-8b95-12a65adf3ea5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.508 227364 DEBUG oslo_concurrency.lockutils [req-ebfd3706-815f-4b36-8a11-16241d84c921 req-1a9febec-9bb5-47c1-8b95-12a65adf3ea5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.508 227364 DEBUG oslo_concurrency.lockutils [req-ebfd3706-815f-4b36-8a11-16241d84c921 req-1a9febec-9bb5-47c1-8b95-12a65adf3ea5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.509 227364 DEBUG oslo_concurrency.lockutils [req-ebfd3706-815f-4b36-8a11-16241d84c921 req-1a9febec-9bb5-47c1-8b95-12a65adf3ea5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.509 227364 DEBUG nova.compute.manager [req-ebfd3706-815f-4b36-8a11-16241d84c921 req-1a9febec-9bb5-47c1-8b95-12a65adf3ea5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Processing event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.510 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.513 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405855.5133955, 549af500-fc44-45ed-a35d-c215ef3e4d5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.514 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.515 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.518 227364 INFO nova.virt.libvirt.driver [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Instance spawned successfully.#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.518 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.554 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.558 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.558 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.559 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.559 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.559 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.560 227364 DEBUG nova.virt.libvirt.driver [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.564 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.605 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.634 227364 INFO nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Took 14.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.635 227364 DEBUG nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.703 227364 INFO nova.compute.manager [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Took 15.68 seconds to build instance.#033[00m
Nov 29 03:44:15 np0005539551 nova_compute[227360]: 2025-11-29 08:44:15.735 227364 DEBUG oslo_concurrency.lockutils [None req-67d75861-32e7-45f1-84e4-43a98f48d189 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:15 np0005539551 podman[299151]: 2025-11-29 08:44:15.65084152 +0000 UTC m=+0.020463204 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:44:16 np0005539551 podman[299151]: 2025-11-29 08:44:16.123850658 +0000 UTC m=+0.493472322 container create c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:44:16 np0005539551 systemd[1]: Started libpod-conmon-c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4.scope.
Nov 29 03:44:16 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:44:16 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b249441a5cc67e17af4c37c9318fd1ff827ededd273b4ede0c9f40887cb4a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:16 np0005539551 podman[299151]: 2025-11-29 08:44:16.355481149 +0000 UTC m=+0.725102833 container init c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:44:16 np0005539551 podman[299151]: 2025-11-29 08:44:16.363335602 +0000 UTC m=+0.732957296 container start c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:44:16 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [NOTICE]   (299169) : New worker (299171) forked
Nov 29 03:44:16 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [NOTICE]   (299169) : Loading success.
Nov 29 03:44:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2946470916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.594 227364 DEBUG nova.compute.manager [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.595 227364 DEBUG oslo_concurrency.lockutils [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.595 227364 DEBUG oslo_concurrency.lockutils [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.595 227364 DEBUG oslo_concurrency.lockutils [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.596 227364 DEBUG nova.compute.manager [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] No waiting events found dispatching network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.596 227364 WARNING nova.compute.manager [req-47a3ccec-b833-4e53-9ea6-0a2ae5793224 req-54d510a0-57e4-42c5-8059-3eec1b87821d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received unexpected event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:17 np0005539551 nova_compute[227360]: 2025-11-29 08:44:17.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:18.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.339 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.381 227364 DEBUG nova.compute.manager [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.382 227364 DEBUG nova.compute.manager [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing instance network info cache due to event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.382 227364 DEBUG oslo_concurrency.lockutils [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.382 227364 DEBUG oslo_concurrency.lockutils [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:19 np0005539551 nova_compute[227360]: 2025-11-29 08:44:19.382 227364 DEBUG nova.network.neutron [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:19.894 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:19.894 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:19.895 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:20 np0005539551 nova_compute[227360]: 2025-11-29 08:44:20.689 227364 DEBUG nova.network.neutron [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updated VIF entry in instance network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:20 np0005539551 nova_compute[227360]: 2025-11-29 08:44:20.689 227364 DEBUG nova.network.neutron [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updating instance_info_cache with network_info: [{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:20 np0005539551 nova_compute[227360]: 2025-11-29 08:44:20.712 227364 DEBUG oslo_concurrency.lockutils [req-b5c3621e-3e19-4f97-8c1b-56bd78b4c8f2 req-304c55ef-435a-4054-819d-9ea13c7e4d09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 03:44:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:22 np0005539551 nova_compute[227360]: 2025-11-29 08:44:22.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:24 np0005539551 nova_compute[227360]: 2025-11-29 08:44:24.341 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:24.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:24 np0005539551 nova_compute[227360]: 2025-11-29 08:44:24.644 227364 DEBUG nova.compute.manager [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:24 np0005539551 nova_compute[227360]: 2025-11-29 08:44:24.707 227364 INFO nova.compute.manager [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] instance snapshotting#033[00m
Nov 29 03:44:24 np0005539551 nova_compute[227360]: 2025-11-29 08:44:24.966 227364 INFO nova.virt.libvirt.driver [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Beginning live snapshot process#033[00m
Nov 29 03:44:25 np0005539551 nova_compute[227360]: 2025-11-29 08:44:25.121 227364 DEBUG nova.virt.libvirt.imagebackend [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:44:25 np0005539551 nova_compute[227360]: 2025-11-29 08:44:25.325 227364 DEBUG nova.storage.rbd_utils [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] creating snapshot(ff135b7fa59f4b379296e3640b1054ad) on rbd image(da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:44:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 03:44:25 np0005539551 nova_compute[227360]: 2025-11-29 08:44:25.723 227364 DEBUG nova.storage.rbd_utils [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] cloning vms/da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk@ff135b7fa59f4b379296e3640b1054ad to images/19c04997-1e5f-42a9-90ff-a53c93a49ed0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:44:25 np0005539551 nova_compute[227360]: 2025-11-29 08:44:25.849 227364 DEBUG nova.storage.rbd_utils [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] flattening images/19c04997-1e5f-42a9-90ff-a53c93a49ed0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:44:26 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 03:44:26 np0005539551 nova_compute[227360]: 2025-11-29 08:44:26.348 227364 DEBUG nova.storage.rbd_utils [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] removing snapshot(ff135b7fa59f4b379296e3640b1054ad) on rbd image(da14f334-c7fe-428d-b6d2-32c2f4cc4054_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:44:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:26.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.706062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866706097, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1333, "num_deletes": 251, "total_data_size": 2881754, "memory_usage": 2920496, "flush_reason": "Manual Compaction"}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866717374, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 1879311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66395, "largest_seqno": 67723, "table_properties": {"data_size": 1873521, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13007, "raw_average_key_size": 20, "raw_value_size": 1861649, "raw_average_value_size": 2908, "num_data_blocks": 137, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405765, "oldest_key_time": 1764405765, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 11340 microseconds, and 5056 cpu microseconds.
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.717401) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 1879311 bytes OK
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.717417) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.718973) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.718985) EVENT_LOG_v1 {"time_micros": 1764405866718982, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.719000) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2875376, prev total WAL file size 2875376, number of live WAL files 2.
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.719913) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(1835KB)], [135(12MB)]
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866719947, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14798676, "oldest_snapshot_seqno": -1}
Nov 29 03:44:26 np0005539551 nova_compute[227360]: 2025-11-29 08:44:26.719 227364 DEBUG nova.storage.rbd_utils [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] creating snapshot(snap) on rbd image(19c04997-1e5f-42a9-90ff-a53c93a49ed0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9728 keys, 12856552 bytes, temperature: kUnknown
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866832874, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12856552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12793294, "index_size": 37865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 257382, "raw_average_key_size": 26, "raw_value_size": 12621841, "raw_average_value_size": 1297, "num_data_blocks": 1439, "num_entries": 9728, "num_filter_entries": 9728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.833079) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12856552 bytes
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.834963) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.0 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.3 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 10249, records dropped: 521 output_compression: NoCompression
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.834978) EVENT_LOG_v1 {"time_micros": 1764405866834971, "job": 86, "event": "compaction_finished", "compaction_time_micros": 112992, "compaction_time_cpu_micros": 41373, "output_level": 6, "num_output_files": 1, "total_output_size": 12856552, "num_input_records": 10249, "num_output_records": 9728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866835310, "job": 86, "event": "table_file_deletion", "file_number": 137}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866837703, "job": 86, "event": "table_file_deletion", "file_number": 135}
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.719847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.837736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.837741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.837743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.837745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:44:26.837747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:27 np0005539551 nova_compute[227360]: 2025-11-29 08:44:27.692 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 03:44:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:28.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539551 nova_compute[227360]: 2025-11-29 08:44:29.171 227364 INFO nova.virt.libvirt.driver [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Snapshot image upload complete#033[00m
Nov 29 03:44:29 np0005539551 nova_compute[227360]: 2025-11-29 08:44:29.171 227364 INFO nova.compute.manager [None req-ca845297-03a2-4b1b-98e6-a355553633b2 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Took 4.46 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:44:29 np0005539551 nova_compute[227360]: 2025-11-29 08:44:29.342 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:29Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:3c:05 10.100.0.5
Nov 29 03:44:29 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:29Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:3c:05 10.100.0.5
Nov 29 03:44:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:30.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:32 np0005539551 nova_compute[227360]: 2025-11-29 08:44:32.694 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:34 np0005539551 nova_compute[227360]: 2025-11-29 08:44:34.344 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 03:44:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:35.620 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:35 np0005539551 nova_compute[227360]: 2025-11-29 08:44:35.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:35.621 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:44:35 np0005539551 podman[299372]: 2025-11-29 08:44:35.640311933 +0000 UTC m=+0.073916312 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:35 np0005539551 podman[299373]: 2025-11-29 08:44:35.657354756 +0000 UTC m=+0.092270630 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:44:35 np0005539551 podman[299371]: 2025-11-29 08:44:35.662503424 +0000 UTC m=+0.099744481 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:44:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:36.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:44:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.9 total, 600.0 interval#012Cumulative writes: 62K writes, 244K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.68 writes per sync, written: 0.24 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.44 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4694 syncs, 2.50 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:44:37 np0005539551 nova_compute[227360]: 2025-11-29 08:44:37.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:38 np0005539551 nova_compute[227360]: 2025-11-29 08:44:38.984 227364 DEBUG nova.compute.manager [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:38 np0005539551 nova_compute[227360]: 2025-11-29 08:44:38.984 227364 DEBUG nova.compute.manager [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing instance network info cache due to event network-changed-365d4d20-fe0e-400e-9546-ae7e92d68355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:38 np0005539551 nova_compute[227360]: 2025-11-29 08:44:38.984 227364 DEBUG oslo_concurrency.lockutils [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:38 np0005539551 nova_compute[227360]: 2025-11-29 08:44:38.985 227364 DEBUG oslo_concurrency.lockutils [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:38 np0005539551 nova_compute[227360]: 2025-11-29 08:44:38.985 227364 DEBUG nova.network.neutron [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Refreshing network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:44:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002444515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:44:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:44:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002444515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.093 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.093 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.094 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.094 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.094 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.095 227364 INFO nova.compute.manager [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Terminating instance#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.096 227364 DEBUG nova.compute.manager [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:44:39 np0005539551 kernel: tap365d4d20-fe (unregistering): left promiscuous mode
Nov 29 03:44:39 np0005539551 NetworkManager[48922]: <info>  [1764405879.1497] device (tap365d4d20-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.159 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:39Z|00885|binding|INFO|Releasing lport 365d4d20-fe0e-400e-9546-ae7e92d68355 from this chassis (sb_readonly=0)
Nov 29 03:44:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:39Z|00886|binding|INFO|Setting lport 365d4d20-fe0e-400e-9546-ae7e92d68355 down in Southbound
Nov 29 03:44:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:39Z|00887|binding|INFO|Removing iface tap365d4d20-fe ovn-installed in OVS
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.161 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.166 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:3c:05 10.100.0.5'], port_security=['fa:16:3e:ae:3c:05 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '549af500-fc44-45ed-a35d-c215ef3e4d5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-206c8741-77b9-46d9-99b0-d93958188a98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c9fbbd1-26e1-4977-8e8c-89dbf746498f e4a70837-9964-47ac-95c0-dc348160aa14', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b205763b-d396-48de-89c3-9188104d7f46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=365d4d20-fe0e-400e-9546-ae7e92d68355) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.168 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 365d4d20-fe0e-400e-9546-ae7e92d68355 in datapath 206c8741-77b9-46d9-99b0-d93958188a98 unbound from our chassis#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.170 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 206c8741-77b9-46d9-99b0-d93958188a98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.172 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23a0d795-ce5b-4049-88d1-7eb6b848d95c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.172 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98 namespace which is not needed anymore#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.176 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Nov 29 03:44:39 np0005539551 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cd.scope: Consumed 14.129s CPU time.
Nov 29 03:44:39 np0005539551 systemd-machined[190756]: Machine qemu-91-instance-000000cd terminated.
Nov 29 03:44:39 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [NOTICE]   (299169) : haproxy version is 2.8.14-c23fe91
Nov 29 03:44:39 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [NOTICE]   (299169) : path to executable is /usr/sbin/haproxy
Nov 29 03:44:39 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [WARNING]  (299169) : Exiting Master process...
Nov 29 03:44:39 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [ALERT]    (299169) : Current worker (299171) exited with code 143 (Terminated)
Nov 29 03:44:39 np0005539551 neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98[299165]: [WARNING]  (299169) : All workers exited. Exiting... (0)
Nov 29 03:44:39 np0005539551 systemd[1]: libpod-c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4.scope: Deactivated successfully.
Nov 29 03:44:39 np0005539551 podman[299458]: 2025-11-29 08:44:39.299671874 +0000 UTC m=+0.042811491 container died c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.335 227364 INFO nova.virt.libvirt.driver [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Instance destroyed successfully.#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.336 227364 DEBUG nova.objects.instance [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid 549af500-fc44-45ed-a35d-c215ef3e4d5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:44:39 np0005539551 systemd[1]: var-lib-containers-storage-overlay-31b249441a5cc67e17af4c37c9318fd1ff827ededd273b4ede0c9f40887cb4a5-merged.mount: Deactivated successfully.
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.346 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 podman[299458]: 2025-11-29 08:44:39.347681414 +0000 UTC m=+0.090821001 container cleanup c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.349 227364 DEBUG nova.virt.libvirt.vif [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-2042085356',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=205,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8Un+xYKSpvPA5DktAyDa6z/asyQ7UxXLrzPLNr4ETJXNVwb3MZ6P5tcTBT3ZcirPqCAKZ+Y2o6ziqz6g69ldHrHH2XDMkI5BNZQk0ea7y/UzgrihO5IQPTj0spY6QEcg==',key_name='tempest-TestSecurityGroupsBasicOps-240470012',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-s7dkbslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:15Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=549af500-fc44-45ed-a35d-c215ef3e4d5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.350 227364 DEBUG nova.network.os_vif_util [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.350 227364 DEBUG nova.network.os_vif_util [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.351 227364 DEBUG os_vif [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.353 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap365d4d20-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.358 227364 INFO os_vif [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:3c:05,bridge_name='br-int',has_traffic_filtering=True,id=365d4d20-fe0e-400e-9546-ae7e92d68355,network=Network(206c8741-77b9-46d9-99b0-d93958188a98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap365d4d20-fe')#033[00m
Nov 29 03:44:39 np0005539551 systemd[1]: libpod-conmon-c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4.scope: Deactivated successfully.
Nov 29 03:44:39 np0005539551 podman[299501]: 2025-11-29 08:44:39.41734229 +0000 UTC m=+0.041233578 container remove c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.424 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[afd0216e-668d-4520-867b-32c8232ed0a2]: (4, ('Sat Nov 29 08:44:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98 (c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4)\nc3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4\nSat Nov 29 08:44:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98 (c3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4)\nc3dde4da0950cc47362acd5a2b0294b9c8affc8cacf4061db366769e5f9caeb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.426 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f94814-cb2c-4a1c-bcba-aafbc1b5179c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.427 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap206c8741-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 kernel: tap206c8741-70: left promiscuous mode
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.444 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.446 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c31e1b43-a355-4785-b70f-5698078e176c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.461 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac55b71-b407-4bff-897b-83c7cdf65e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.462 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e64e8ca1-aa8a-472f-9cb1-28c60de1fcd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.481 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ff6b7d-1ba1-4edb-8943-9a9ba5c9bfc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890189, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299533, 'error': None, 'target': 'ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 systemd[1]: run-netns-ovnmeta\x2d206c8741\x2d77b9\x2d46d9\x2d99b0\x2dd93958188a98.mount: Deactivated successfully.
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.485 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-206c8741-77b9-46d9-99b0-d93958188a98 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:44:39 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:39.485 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[d6804a24-24d8-465e-b31e-c78e67acd606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.765 227364 INFO nova.virt.libvirt.driver [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Deleting instance files /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f_del#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.766 227364 INFO nova.virt.libvirt.driver [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Deletion of /var/lib/nova/instances/549af500-fc44-45ed-a35d-c215ef3e4d5f_del complete#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.821 227364 INFO nova.compute.manager [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.822 227364 DEBUG oslo.service.loopingcall [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.822 227364 DEBUG nova.compute.manager [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:44:39 np0005539551 nova_compute[227360]: 2025-11-29 08:44:39.823 227364 DEBUG nova.network.neutron [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:44:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.881 227364 DEBUG nova.network.neutron [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updated VIF entry in instance network info cache for port 365d4d20-fe0e-400e-9546-ae7e92d68355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.882 227364 DEBUG nova.network.neutron [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updating instance_info_cache with network_info: [{"id": "365d4d20-fe0e-400e-9546-ae7e92d68355", "address": "fa:16:3e:ae:3c:05", "network": {"id": "206c8741-77b9-46d9-99b0-d93958188a98", "bridge": "br-int", "label": "tempest-network-smoke--684921249", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap365d4d20-fe", "ovs_interfaceid": "365d4d20-fe0e-400e-9546-ae7e92d68355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.884 227364 DEBUG nova.network.neutron [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.902 227364 DEBUG oslo_concurrency.lockutils [req-8feed832-4701-4744-b865-3e4225e77ec9 req-198fe531-4547-4d86-a0af-50f7110f3552 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-549af500-fc44-45ed-a35d-c215ef3e4d5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.907 227364 INFO nova.compute.manager [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.964 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.965 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:40 np0005539551 nova_compute[227360]: 2025-11-29 08:44:40.984 227364 DEBUG nova.compute.manager [req-29849bf1-ce21-4920-8a13-cdac30c95ad3 req-9947da8e-a4cc-4809-b331-ff00baec9b88 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-vif-deleted-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.027 227364 DEBUG oslo_concurrency.processutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.061 227364 DEBUG nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-vif-unplugged-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.062 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.062 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.062 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.063 227364 DEBUG nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] No waiting events found dispatching network-vif-unplugged-365d4d20-fe0e-400e-9546-ae7e92d68355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.063 227364 WARNING nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received unexpected event network-vif-unplugged-365d4d20-fe0e-400e-9546-ae7e92d68355 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.063 227364 DEBUG nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.063 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.063 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.064 227364 DEBUG oslo_concurrency.lockutils [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.064 227364 DEBUG nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] No waiting events found dispatching network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.064 227364 WARNING nova.compute.manager [req-e745af2a-9104-40a2-96ae-8b42750b15d6 req-9a653998-dc3d-4ae9-991a-165eac80dda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Received unexpected event network-vif-plugged-365d4d20-fe0e-400e-9546-ae7e92d68355 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:41 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1274356442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.444 227364 DEBUG oslo_concurrency.processutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.450 227364 DEBUG nova.compute.provider_tree [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.467 227364 DEBUG nova.scheduler.client.report [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.489 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.512 227364 INFO nova.scheduler.client.report [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance 549af500-fc44-45ed-a35d-c215ef3e4d5f#033[00m
Nov 29 03:44:41 np0005539551 nova_compute[227360]: 2025-11-29 08:44:41.586 227364 DEBUG oslo_concurrency.lockutils [None req-643123e2-9cd9-4640-aec9-502a42a575c6 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "549af500-fc44-45ed-a35d-c215ef3e4d5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:42.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:44 np0005539551 nova_compute[227360]: 2025-11-29 08:44:44.348 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:44 np0005539551 nova_compute[227360]: 2025-11-29 08:44:44.353 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:44.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:45 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:44:45.623 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:46 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:46Z|00888|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:44:46 np0005539551 nova_compute[227360]: 2025-11-29 08:44:46.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:46.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:46.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:48 np0005539551 nova_compute[227360]: 2025-11-29 08:44:48.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:48.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539551 nova_compute[227360]: 2025-11-29 08:44:49.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539551 nova_compute[227360]: 2025-11-29 08:44:49.355 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539551 nova_compute[227360]: 2025-11-29 08:44:49.463 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:44:49Z|00889|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:44:49 np0005539551 nova_compute[227360]: 2025-11-29 08:44:49.763 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.429 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.429 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.430 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.430 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.430 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2979671379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.863 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.970 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:44:50 np0005539551 nova_compute[227360]: 2025-11-29 08:44:50.971 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.114 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.115 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4001MB free_disk=20.896705627441406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.115 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.116 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.196 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance da14f334-c7fe-428d-b6d2-32c2f4cc4054 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.196 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.197 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.236 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/406913568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.680 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.688 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.724 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.753 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:44:51 np0005539551 nova_compute[227360]: 2025-11-29 08:44:51.754 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539551 nova_compute[227360]: 2025-11-29 08:44:52.913 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:53 np0005539551 nova_compute[227360]: 2025-11-29 08:44:53.890 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.333 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405879.332745, 549af500-fc44-45ed-a35d-c215ef3e4d5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.334 227364 INFO nova.compute.manager [-] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.352 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.361 227364 DEBUG nova.compute.manager [None req-f73ff0d0-1026-48fb-9f46-63bfc9e241fa - - - - - -] [instance: 549af500-fc44-45ed-a35d-c215ef3e4d5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.754 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.754 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:44:54 np0005539551 nova_compute[227360]: 2025-11-29 08:44:54.754 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:44:55 np0005539551 nova_compute[227360]: 2025-11-29 08:44:55.093 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:55 np0005539551 nova_compute[227360]: 2025-11-29 08:44:55.093 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:55 np0005539551 nova_compute[227360]: 2025-11-29 08:44:55.093 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:44:55 np0005539551 nova_compute[227360]: 2025-11-29 08:44:55.093 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid da14f334-c7fe-428d-b6d2-32c2f4cc4054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:56.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:56 np0005539551 nova_compute[227360]: 2025-11-29 08:44:56.813 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:56 np0005539551 nova_compute[227360]: 2025-11-29 08:44:56.841 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:56 np0005539551 nova_compute[227360]: 2025-11-29 08:44:56.842 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:44:56 np0005539551 nova_compute[227360]: 2025-11-29 08:44:56.842 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:56 np0005539551 nova_compute[227360]: 2025-11-29 08:44:56.843 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:57 np0005539551 nova_compute[227360]: 2025-11-29 08:44:57.493 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:58 np0005539551 nova_compute[227360]: 2025-11-29 08:44:58.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:44:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.357 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.818 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.818 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.839 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.939 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.939 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.944 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:44:59 np0005539551 nova_compute[227360]: 2025-11-29 08:44:59.945 227364 INFO nova.compute.claims [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.108 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3713873029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.546 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.551 227364 DEBUG nova.compute.provider_tree [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.566 227364 DEBUG nova.scheduler.client.report [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.592 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.593 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:45:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:00.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.654 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.654 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.676 227364 INFO nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.694 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.833 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.834 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.834 227364 INFO nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Creating image(s)#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.859 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.886 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.915 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.919 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.996 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.998 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.998 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:00 np0005539551 nova_compute[227360]: 2025-11-29 08:45:00.999 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.024 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.030 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.303 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.364 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.400 227364 DEBUG nova.policy [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.452 227364 DEBUG nova.objects.instance [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 964cac7a-4c2d-4b5e-8522-81b99f1416b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.465 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.466 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Ensure instance console log exists: /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.466 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.467 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:01 np0005539551 nova_compute[227360]: 2025-11-29 08:45:01.467 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:03 np0005539551 nova_compute[227360]: 2025-11-29 08:45:03.114 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:04 np0005539551 nova_compute[227360]: 2025-11-29 08:45:04.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:04 np0005539551 nova_compute[227360]: 2025-11-29 08:45:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:04 np0005539551 nova_compute[227360]: 2025-11-29 08:45:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:04 np0005539551 nova_compute[227360]: 2025-11-29 08:45:04.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:45:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:04.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:05 np0005539551 nova_compute[227360]: 2025-11-29 08:45:05.456 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Successfully created port: 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:45:06 np0005539551 nova_compute[227360]: 2025-11-29 08:45:06.425 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:06 np0005539551 podman[299792]: 2025-11-29 08:45:06.620425919 +0000 UTC m=+0.060037726 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:45:06 np0005539551 podman[299791]: 2025-11-29 08:45:06.627561062 +0000 UTC m=+0.071458846 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 03:45:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:06.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:06 np0005539551 podman[299790]: 2025-11-29 08:45:06.680619249 +0000 UTC m=+0.125082698 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:45:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.070 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Successfully updated port: 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.090 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.091 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.092 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.183 227364 DEBUG nova.compute.manager [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.183 227364 DEBUG nova.compute.manager [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing instance network info cache due to event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.183 227364 DEBUG oslo_concurrency.lockutils [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.266 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:07 np0005539551 nova_compute[227360]: 2025-11-29 08:45:07.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:45:08 np0005539551 nova_compute[227360]: 2025-11-29 08:45:08.412 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:08.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.359 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.673 227364 DEBUG nova.network.neutron [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.703 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.703 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Instance network_info: |[{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.704 227364 DEBUG oslo_concurrency.lockutils [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.704 227364 DEBUG nova.network.neutron [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.706 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Start _get_guest_xml network_info=[{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.711 227364 WARNING nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.716 227364 DEBUG nova.virt.libvirt.host [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.717 227364 DEBUG nova.virt.libvirt.host [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.721 227364 DEBUG nova.virt.libvirt.host [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.722 227364 DEBUG nova.virt.libvirt.host [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.723 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.724 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.724 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.724 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.725 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.725 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.725 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.726 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.726 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.726 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.727 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.727 227364 DEBUG nova.virt.hardware [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:45:09 np0005539551 nova_compute[227360]: 2025-11-29 08:45:09.731 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2366485529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.213 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.248 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.254 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 68K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1681 writes, 8609 keys, 1681 commit groups, 1.0 writes per commit group, ingest: 16.52 MB, 0.03 MB/s#012Interval WAL: 1681 writes, 1681 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     16.0      5.34              0.30        43    0.124       0      0       0.0       0.0#012  L6      1/0   12.26 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.4     41.2     35.5     12.92              1.45        42    0.308    314K    22K       0.0       0.0#012 Sum      1/0   12.26 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.4     29.1     29.8     18.26              1.75        85    0.215    314K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.1    140.5    140.6      0.67              0.26        14    0.048     69K   3684       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     41.2     35.5     12.92              1.45        42    0.308    314K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     16.0      5.34              0.30        42    0.127       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.083, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.53 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 18.3 seconds#012Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 54.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.00068 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3042,51.97 MB,17.0967%) FilterBlock(85,867.30 KB,0.278608%) IndexBlock(85,1.45 MB,0.477189%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:45:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/557741450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.698 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.702 227364 DEBUG nova.virt.libvirt.vif [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=208,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-c73j3yhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:00Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=964cac7a-4c2d-4b5e-8522-81b99f1416b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.703 227364 DEBUG nova.network.os_vif_util [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.705 227364 DEBUG nova.network.os_vif_util [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.709 227364 DEBUG nova.objects.instance [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 964cac7a-4c2d-4b5e-8522-81b99f1416b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.728 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <uuid>964cac7a-4c2d-4b5e-8522-81b99f1416b5</uuid>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <name>instance-000000d0</name>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171</nova:name>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:45:09</nova:creationTime>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <nova:port uuid="6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="serial">964cac7a-4c2d-4b5e-8522-81b99f1416b5</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="uuid">964cac7a-4c2d-4b5e-8522-81b99f1416b5</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:12:9a:71"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <target dev="tap6b7b58a4-b5"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/console.log" append="off"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:45:10 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:45:10 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:45:10 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:45:10 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.729 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Preparing to wait for external event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.730 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.730 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.730 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.731 227364 DEBUG nova.virt.libvirt.vif [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=208,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-c73j3yhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:00Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=964cac7a-4c2d-4b5e-8522-81b99f1416b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.731 227364 DEBUG nova.network.os_vif_util [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.732 227364 DEBUG nova.network.os_vif_util [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.732 227364 DEBUG os_vif [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.733 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.733 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.733 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.736 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.736 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b7b58a4-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.737 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b7b58a4-b5, col_values=(('external_ids', {'iface-id': '6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:9a:71', 'vm-uuid': '964cac7a-4c2d-4b5e-8522-81b99f1416b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.738 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:10 np0005539551 NetworkManager[48922]: <info>  [1764405910.7399] manager: (tap6b7b58a4-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.740 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.747 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.748 227364 INFO os_vif [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5')
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.807 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.807 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.808 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:12:9a:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.808 227364 INFO nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Using config drive
Nov 29 03:45:10 np0005539551 nova_compute[227360]: 2025-11-29 08:45:10.835 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.615 227364 INFO nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Creating config drive at /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.620 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14roacaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.766 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14roacaj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.806 227364 DEBUG nova.storage.rbd_utils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.809 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.975 227364 DEBUG oslo_concurrency.processutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config 964cac7a-4c2d-4b5e-8522-81b99f1416b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:11 np0005539551 nova_compute[227360]: 2025-11-29 08:45:11.976 227364 INFO nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Deleting local config drive /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5/disk.config because it was imported into RBD.
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.004 227364 DEBUG nova.network.neutron [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updated VIF entry in instance network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.005 227364 DEBUG nova.network.neutron [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:45:12 np0005539551 kernel: tap6b7b58a4-b5: entered promiscuous mode
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.0314] manager: (tap6b7b58a4-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 03:45:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:12Z|00890|binding|INFO|Claiming lport 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b for this chassis.
Nov 29 03:45:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:12Z|00891|binding|INFO|6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b: Claiming fa:16:3e:12:9a:71 10.100.0.6
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.032 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.040 227364 DEBUG oslo_concurrency.lockutils [req-6cfdb1ea-f615-46c2-b3d7-70d743e9cefa req-b106deb7-026e-4ff8-978a-45d85e5f9697 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.042 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:9a:71 10.100.0.6'], port_security=['fa:16:3e:12:9a:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '964cac7a-4c2d-4b5e-8522-81b99f1416b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c95871d-d156-4882-b0a0-97ff36c1744a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a6c30f76-b8e4-47e3-b764-b9886274509b e84628c0-0159-4dba-85f8-ba5fad9cdcdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfd6ff47-3ad6-4cdd-b1fa-0e77564ed30b, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.044 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b in datapath 7c95871d-d156-4882-b0a0-97ff36c1744a bound to our chassis
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.046 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:12Z|00892|binding|INFO|Setting lport 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b ovn-installed in OVS
Nov 29 03:45:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:12Z|00893|binding|INFO|Setting lport 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b up in Southbound
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.047 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.061 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1cfc43-7e38-47f1-b55a-4e19158aa850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.062 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c95871d-d1 in ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.064 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c95871d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.064 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[09a75f92-c863-4374-9010-429d2cd890d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 systemd-udevd[299987]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.065 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[45f50c23-2aab-4713-bf82-8ad56271f56d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 systemd-machined[190756]: New machine qemu-92-instance-000000d0.
Nov 29 03:45:12 np0005539551 systemd[1]: Started Virtual Machine qemu-92-instance-000000d0.
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.078 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b8568cfe-414b-4cee-8f03-a3a47922d64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.0800] device (tap6b7b58a4-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.0816] device (tap6b7b58a4-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.093 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3233e8af-52fd-4c69-bcec-56d8ba134dd2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.124 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4d1e6a-abb8-4d6a-ad97-131baed93342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.128 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fd653666-163f-4221-a03e-f7f257aedb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.1295] manager: (tap7c95871d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.155 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[62676250-76fe-47c2-985c-2ab1e7dff198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.158 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[13f2157a-1592-4c95-ac43-3a270d779188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.1817] device (tap7c95871d-d0): carrier: link connected
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.185 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e458480c-b05b-414b-8c06-315dd25319fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.203 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d1198c4b-e950-4aba-a61a-bc9cf5301491]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c95871d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9a:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895906, 'reachable_time': 32366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300019, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.217 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1515729a-fb44-4d4e-8c38-4c6006a1d7dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:9ac4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 895906, 'tstamp': 895906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300020, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.237 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3da6a1-a661-46e5-8c45-48dc6427ef6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c95871d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9a:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895906, 'reachable_time': 32366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300021, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.261 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0335b973-1cac-4ada-93c0-6f617dbe4aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.295 227364 DEBUG nova.compute.manager [req-ebad1257-4cf4-418a-a880-69915a0dfdf9 req-6ed6f093-2096-415c-8851-8f0a90d226a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.296 227364 DEBUG oslo_concurrency.lockutils [req-ebad1257-4cf4-418a-a880-69915a0dfdf9 req-6ed6f093-2096-415c-8851-8f0a90d226a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.296 227364 DEBUG oslo_concurrency.lockutils [req-ebad1257-4cf4-418a-a880-69915a0dfdf9 req-6ed6f093-2096-415c-8851-8f0a90d226a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.296 227364 DEBUG oslo_concurrency.lockutils [req-ebad1257-4cf4-418a-a880-69915a0dfdf9 req-6ed6f093-2096-415c-8851-8f0a90d226a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.296 227364 DEBUG nova.compute.manager [req-ebad1257-4cf4-418a-a880-69915a0dfdf9 req-6ed6f093-2096-415c-8851-8f0a90d226a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Processing event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.325 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbddeb3-db4b-4a17-999d-0643091fcf36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.326 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c95871d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.326 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.326 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c95871d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.328 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:12 np0005539551 NetworkManager[48922]: <info>  [1764405912.3291] manager: (tap7c95871d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 03:45:12 np0005539551 kernel: tap7c95871d-d0: entered promiscuous mode
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.331 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c95871d-d0, col_values=(('external_ids', {'iface-id': '7e0f1264-3298-4910-af53-2aeef940fbc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:12 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:12Z|00894|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:12 np0005539551 nova_compute[227360]: 2025-11-29 08:45:12.345 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.346 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.347 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c2dd71-8ebd-450f-81dd-8a42e40a845c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.348 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:45:12 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:12.348 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'env', 'PROCESS_TAG=haproxy-7c95871d-d156-4882-b0a0-97ff36c1744a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c95871d-d156-4882-b0a0-97ff36c1744a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:45:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:12.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:12 np0005539551 podman[300053]: 2025-11-29 08:45:12.713425084 +0000 UTC m=+0.051594288 container create fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:12 np0005539551 systemd[1]: Started libpod-conmon-fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe.scope.
Nov 29 03:45:12 np0005539551 podman[300053]: 2025-11-29 08:45:12.683937706 +0000 UTC m=+0.022106940 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:45:12 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:45:12 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9308a62708c24523adcbe92fde658df59065e081c803b70b759f30676464131e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:12 np0005539551 podman[300053]: 2025-11-29 08:45:12.800937994 +0000 UTC m=+0.139107228 container init fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:45:12 np0005539551 podman[300053]: 2025-11-29 08:45:12.806098763 +0000 UTC m=+0.144267967 container start fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:45:12 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [NOTICE]   (300072) : New worker (300074) forked
Nov 29 03:45:12 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [NOTICE]   (300072) : Loading success.
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.209 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405913.2095397, 964cac7a-4c2d-4b5e-8522-81b99f1416b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.210 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] VM Started (Lifecycle Event)#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.212 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.215 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.217 227364 INFO nova.virt.libvirt.driver [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Instance spawned successfully.#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.218 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.233 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.236 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.240 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.240 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.241 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.241 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.241 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.242 227364 DEBUG nova.virt.libvirt.driver [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.271 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.275 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405913.209693, 964cac7a-4c2d-4b5e-8522-81b99f1416b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.275 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.304 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.307 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764405913.214405, 964cac7a-4c2d-4b5e-8522-81b99f1416b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.307 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.313 227364 INFO nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Took 12.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.313 227364 DEBUG nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.372 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.376 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.419 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.451 227364 INFO nova.compute.manager [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Took 13.55 seconds to build instance.#033[00m
Nov 29 03:45:13 np0005539551 nova_compute[227360]: 2025-11-29 08:45:13.472 227364 DEBUG oslo_concurrency.lockutils [None req-dbca7599-1705-4a36-9058-c1cd3b82df70 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.362 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.403 227364 DEBUG nova.compute.manager [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.403 227364 DEBUG oslo_concurrency.lockutils [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.404 227364 DEBUG oslo_concurrency.lockutils [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.404 227364 DEBUG oslo_concurrency.lockutils [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.404 227364 DEBUG nova.compute.manager [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] No waiting events found dispatching network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:14 np0005539551 nova_compute[227360]: 2025-11-29 08:45:14.405 227364 WARNING nova.compute.manager [req-7b5bc7a3-4885-4c58-8048-1325f8b50f24 req-2c457e40-f680-4ae2-b20d-6c1e421fcfc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received unexpected event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:45:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:15 np0005539551 nova_compute[227360]: 2025-11-29 08:45:15.740 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:16.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:18.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:19 np0005539551 nova_compute[227360]: 2025-11-29 08:45:19.364 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:19.894 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:19.895 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.550 227364 DEBUG nova.compute.manager [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.551 227364 DEBUG nova.compute.manager [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing instance network info cache due to event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.551 227364 DEBUG oslo_concurrency.lockutils [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.551 227364 DEBUG oslo_concurrency.lockutils [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.551 227364 DEBUG nova.network.neutron [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:20.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:20.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:20 np0005539551 nova_compute[227360]: 2025-11-29 08:45:20.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 03:45:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:22.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 03:45:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2862608233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:23 np0005539551 nova_compute[227360]: 2025-11-29 08:45:23.900 227364 DEBUG nova.network.neutron [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updated VIF entry in instance network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:23 np0005539551 nova_compute[227360]: 2025-11-29 08:45:23.901 227364 DEBUG nova.network.neutron [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:23 np0005539551 nova_compute[227360]: 2025-11-29 08:45:23.951 227364 DEBUG oslo_concurrency.lockutils [req-f83666ce-4330-4882-a01d-c6c9b12f1065 req-02c78f6b-8591-494d-96d9-504c1927a2cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 03:45:24 np0005539551 nova_compute[227360]: 2025-11-29 08:45:24.365 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:24 np0005539551 nova_compute[227360]: 2025-11-29 08:45:24.451 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:24 np0005539551 nova_compute[227360]: 2025-11-29 08:45:24.451 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:45:24 np0005539551 nova_compute[227360]: 2025-11-29 08:45:24.482 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:45:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:24.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:24.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:25 np0005539551 nova_compute[227360]: 2025-11-29 08:45:25.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:26.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:26.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:27Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:9a:71 10.100.0.6
Nov 29 03:45:27 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:27Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:9a:71 10.100.0.6
Nov 29 03:45:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 03:45:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:28.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:28.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:29 np0005539551 nova_compute[227360]: 2025-11-29 08:45:29.368 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 03:45:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:30.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539551 nova_compute[227360]: 2025-11-29 08:45:30.748 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:34 np0005539551 nova_compute[227360]: 2025-11-29 08:45:34.369 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 03:45:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 03:45:35 np0005539551 nova_compute[227360]: 2025-11-29 08:45:35.751 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:36 np0005539551 nova_compute[227360]: 2025-11-29 08:45:36.436 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 03:45:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.141 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.141 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.143 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.145 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.459 227364 DEBUG nova.compute.manager [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.460 227364 DEBUG nova.compute.manager [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing instance network info cache due to event network-changed-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.460 227364 DEBUG oslo_concurrency.lockutils [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.461 227364 DEBUG oslo_concurrency.lockutils [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.461 227364 DEBUG nova.network.neutron [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Refreshing network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.539 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.540 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.540 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.541 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.541 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.542 227364 INFO nova.compute.manager [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Terminating instance#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.544 227364 DEBUG nova.compute.manager [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:45:37 np0005539551 kernel: tapb8e3a18a-12 (unregistering): left promiscuous mode
Nov 29 03:45:37 np0005539551 NetworkManager[48922]: <info>  [1764405937.6115] device (tapb8e3a18a-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:37Z|00895|binding|INFO|Releasing lport b8e3a18a-12a7-40b6-8f74-5679f4c9a9df from this chassis (sb_readonly=0)
Nov 29 03:45:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:37Z|00896|binding|INFO|Setting lport b8e3a18a-12a7-40b6-8f74-5679f4c9a9df down in Southbound
Nov 29 03:45:37 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:37Z|00897|binding|INFO|Removing iface tapb8e3a18a-12 ovn-installed in OVS
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.620 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 podman[300308]: 2025-11-29 08:45:37.625980734 +0000 UTC m=+0.076938604 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.633 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:38:a4 10.100.0.3'], port_security=['fa:16:3e:97:38:a4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'da14f334-c7fe-428d-b6d2-32c2f4cc4054', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66ee3b60fb89476383201ba204858d4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '291b52bf-18bb-41c3-a977-3e030dbdb988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b579620-af3c-4534-8cbc-29d18f0dd8a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.634 139482 INFO neutron.agent.ovn.metadata.agent [-] Port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df in datapath c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 unbound from our chassis#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.637 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.639 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.638 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[31377f23-d8f3-4774-8e95-e83d8bb9f7fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.639 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 namespace which is not needed anymore#033[00m
Nov 29 03:45:37 np0005539551 podman[300309]: 2025-11-29 08:45:37.654698701 +0000 UTC m=+0.096591196 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:45:37 np0005539551 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Nov 29 03:45:37 np0005539551 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cb.scope: Consumed 18.356s CPU time.
Nov 29 03:45:37 np0005539551 systemd-machined[190756]: Machine qemu-90-instance-000000cb terminated.
Nov 29 03:45:37 np0005539551 podman[300307]: 2025-11-29 08:45:37.676853501 +0000 UTC m=+0.121390497 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:45:37 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [NOTICE]   (298369) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:37 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [NOTICE]   (298369) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:37 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [WARNING]  (298369) : Exiting Master process...
Nov 29 03:45:37 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [ALERT]    (298369) : Current worker (298371) exited with code 143 (Terminated)
Nov 29 03:45:37 np0005539551 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[298365]: [WARNING]  (298369) : All workers exited. Exiting... (0)
Nov 29 03:45:37 np0005539551 systemd[1]: libpod-28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b.scope: Deactivated successfully.
Nov 29 03:45:37 np0005539551 podman[300392]: 2025-11-29 08:45:37.781618699 +0000 UTC m=+0.047589990 container died 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.788 227364 INFO nova.virt.libvirt.driver [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Instance destroyed successfully.#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.789 227364 DEBUG nova.objects.instance [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'resources' on Instance uuid da14f334-c7fe-428d-b6d2-32c2f4cc4054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.803 227364 DEBUG nova.virt.libvirt.vif [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-2038800219',display_name='tempest-TestSnapshotPattern-server-2038800219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-2038800219',id=203,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-tq0a14ds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:29Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=da14f334-c7fe-428d-b6d2-32c2f4cc4054,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.804 227364 DEBUG nova.network.os_vif_util [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.804 227364 DEBUG nova.network.os_vif_util [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.805 227364 DEBUG os_vif [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.806 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8e3a18a-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.808 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 systemd[1]: var-lib-containers-storage-overlay-2c60d04f9d14a6eb06b6dce12ef890d3b147b13ea6ccf8a507c604e40b382c3b-merged.mount: Deactivated successfully.
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.811 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.812 227364 INFO os_vif [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:38:a4,bridge_name='br-int',has_traffic_filtering=True,id=b8e3a18a-12a7-40b6-8f74-5679f4c9a9df,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8e3a18a-12')#033[00m
Nov 29 03:45:37 np0005539551 podman[300392]: 2025-11-29 08:45:37.817098478 +0000 UTC m=+0.083069770 container cleanup 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:45:37 np0005539551 systemd[1]: libpod-conmon-28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b.scope: Deactivated successfully.
Nov 29 03:45:37 np0005539551 podman[300447]: 2025-11-29 08:45:37.878579933 +0000 UTC m=+0.040474176 container remove 28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.884 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee41ed6-45a7-4267-ae3e-ec769f7d5737]: (4, ('Sat Nov 29 08:45:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 (28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b)\n28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b\nSat Nov 29 08:45:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 (28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b)\n28d7577567ebf7cd8c2dcf07ec6f5ea0e86dfb469386209ab9738dd2365cbd0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.885 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0ddcf4bc-e672-410c-ac2b-7f8cde06740b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.886 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc60162ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.888 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 kernel: tapc60162ec-f0: left promiscuous mode
Nov 29 03:45:37 np0005539551 nova_compute[227360]: 2025-11-29 08:45:37.901 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.904 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[91599acc-5173-40a2-89a7-ef00ceecba85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.918 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9610ff0a-b1d7-425b-af93-2d66aaaee695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.920 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2fb229-3dda-46f5-b292-5ca25183c6d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.940 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d3652839-5c60-4638-94b9-7dee9b5add31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 887407, 'reachable_time': 31920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300466, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:37 np0005539551 systemd[1]: run-netns-ovnmeta\x2dc60162ec\x2df468\x2d4b5f\x2dbd91\x2d89b0a1cb9fa1.mount: Deactivated successfully.
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.944 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:37 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:45:37.944 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9974bb-4627-41f9-a8ac-3366db12078e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.236 227364 INFO nova.virt.libvirt.driver [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Deleting instance files /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054_del#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.237 227364 INFO nova.virt.libvirt.driver [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Deletion of /var/lib/nova/instances/da14f334-c7fe-428d-b6d2-32c2f4cc4054_del complete#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.413 227364 INFO nova.compute.manager [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.413 227364 DEBUG oslo.service.loopingcall [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.414 227364 DEBUG nova.compute.manager [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:45:38 np0005539551 nova_compute[227360]: 2025-11-29 08:45:38.414 227364 DEBUG nova.network.neutron [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:45:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:38.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 03:45:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:45:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949110807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:45:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:45:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949110807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.410 227364 DEBUG nova.network.neutron [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updated VIF entry in instance network info cache for port b8e3a18a-12a7-40b6-8f74-5679f4c9a9df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.411 227364 DEBUG nova.network.neutron [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [{"id": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "address": "fa:16:3e:97:38:a4", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8e3a18a-12", "ovs_interfaceid": "b8e3a18a-12a7-40b6-8f74-5679f4c9a9df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.413 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.440 227364 DEBUG oslo_concurrency.lockutils [req-e4a11dd8-ab8d-442e-8d91-e11cdef709b6 req-bfd29038-7b71-418f-a407-2a7167af71ab 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-da14f334-c7fe-428d-b6d2-32c2f4cc4054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.516 227364 DEBUG nova.network.neutron [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.537 227364 INFO nova.compute.manager [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Took 1.12 seconds to deallocate network for instance.#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.600 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.601 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.710 227364 DEBUG oslo_concurrency.processutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.817 227364 DEBUG nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-vif-unplugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.818 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.818 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.818 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.818 227364 DEBUG nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] No waiting events found dispatching network-vif-unplugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.819 227364 WARNING nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received unexpected event network-vif-unplugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.819 227364 DEBUG nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.819 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.819 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.820 227364 DEBUG oslo_concurrency.lockutils [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.820 227364 DEBUG nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] No waiting events found dispatching network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.820 227364 WARNING nova.compute.manager [req-31360a3c-dddb-479d-a88b-0209c527ce46 req-7ade830c-b5b2-43e0-b3df-281ab9b12b9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received unexpected event network-vif-plugged-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:39 np0005539551 nova_compute[227360]: 2025-11-29 08:45:39.936 227364 DEBUG nova.compute.manager [req-e9456c79-e9e7-425d-a45d-235c37adb768 req-89576489-1d5e-4d83-8897-aefecf5b891f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Received event network-vif-deleted-b8e3a18a-12a7-40b6-8f74-5679f4c9a9df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2665172244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.168 227364 DEBUG oslo_concurrency.processutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.173 227364 DEBUG nova.compute.provider_tree [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.207 227364 DEBUG nova.scheduler.client.report [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.239 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.294 227364 INFO nova.scheduler.client.report [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Deleted allocations for instance da14f334-c7fe-428d-b6d2-32c2f4cc4054#033[00m
Nov 29 03:45:40 np0005539551 nova_compute[227360]: 2025-11-29 08:45:40.403 227364 DEBUG oslo_concurrency.lockutils [None req-9a12c232-6aa9-4f68-b196-a348f3a71b1a 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "da14f334-c7fe-428d-b6d2-32c2f4cc4054" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:40.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 03:45:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:40.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:42 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:42Z|00898|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:42 np0005539551 nova_compute[227360]: 2025-11-29 08:45:42.134 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:42.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:42.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:42 np0005539551 nova_compute[227360]: 2025-11-29 08:45:42.808 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:44 np0005539551 nova_compute[227360]: 2025-11-29 08:45:44.416 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:44.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 03:45:45 np0005539551 nova_compute[227360]: 2025-11-29 08:45:45.976 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:46.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:47 np0005539551 nova_compute[227360]: 2025-11-29 08:45:47.811 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:48Z|00899|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:48 np0005539551 nova_compute[227360]: 2025-11-29 08:45:48.089 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:48.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:48.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539551 nova_compute[227360]: 2025-11-29 08:45:49.419 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:50 np0005539551 nova_compute[227360]: 2025-11-29 08:45:50.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:50.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.449 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.449 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.449 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:52.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.785 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405937.785039, da14f334-c7fe-428d-b6d2-32c2f4cc4054 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.786 227364 INFO nova.compute.manager [-] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.827 227364 DEBUG nova.compute.manager [None req-8e207ce5-3df4-441c-8b19-0802e26d321b - - - - - -] [instance: da14f334-c7fe-428d-b6d2-32c2f4cc4054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2067330443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:52 np0005539551 nova_compute[227360]: 2025-11-29 08:45:52.871 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.653 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.654 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.820 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.821 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4016MB free_disk=20.87639617919922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.822 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.822 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.887 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 964cac7a-4c2d-4b5e-8522-81b99f1416b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.887 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.888 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:45:53 np0005539551 nova_compute[227360]: 2025-11-29 08:45:53.945 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:54Z|00900|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.261 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1909092659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.407 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.412 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.420 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.429 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.452 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.452 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:54 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:54Z|00901|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:54.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:54 np0005539551 nova_compute[227360]: 2025-11-29 08:45:54.729 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.453 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.453 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.454 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:45:56 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:56Z|00902|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:56.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:56.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.864 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.864 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.865 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:45:56 np0005539551 nova_compute[227360]: 2025-11-29 08:45:56.865 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 964cac7a-4c2d-4b5e-8522-81b99f1416b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:57 np0005539551 nova_compute[227360]: 2025-11-29 08:45:57.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:45:58Z|00903|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:58 np0005539551 nova_compute[227360]: 2025-11-29 08:45:58.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:45:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:58.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.423 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.522 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.544 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.544 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.545 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.545 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.545 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.564 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Triggering sync for uuid 964cac7a-4c2d-4b5e-8522-81b99f1416b5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.564 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.565 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:59 np0005539551 nova_compute[227360]: 2025-11-29 08:45:59.592 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:00 np0005539551 nova_compute[227360]: 2025-11-29 08:46:00.517 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:00.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:02 np0005539551 nova_compute[227360]: 2025-11-29 08:46:02.819 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:03 np0005539551 nova_compute[227360]: 2025-11-29 08:46:03.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:04 np0005539551 nova_compute[227360]: 2025-11-29 08:46:04.425 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:06 np0005539551 nova_compute[227360]: 2025-11-29 08:46:06.197 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:06 np0005539551 nova_compute[227360]: 2025-11-29 08:46:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:06 np0005539551 nova_compute[227360]: 2025-11-29 08:46:06.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539551 nova_compute[227360]: 2025-11-29 08:46:07.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:07 np0005539551 nova_compute[227360]: 2025-11-29 08:46:07.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:46:07 np0005539551 nova_compute[227360]: 2025-11-29 08:46:07.822 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:08 np0005539551 podman[300537]: 2025-11-29 08:46:08.613732689 +0000 UTC m=+0.056579313 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:46:08 np0005539551 podman[300536]: 2025-11-29 08:46:08.64632465 +0000 UTC m=+0.093045000 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:46:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:08 np0005539551 podman[300535]: 2025-11-29 08:46:08.683052396 +0000 UTC m=+0.129953001 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:46:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:08.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539551 nova_compute[227360]: 2025-11-29 08:46:09.428 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:12.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:12 np0005539551 nova_compute[227360]: 2025-11-29 08:46:12.825 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:14 np0005539551 nova_compute[227360]: 2025-11-29 08:46:14.150 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:14 np0005539551 nova_compute[227360]: 2025-11-29 08:46:14.429 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:14.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:16.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.437 227364 DEBUG nova.compute.manager [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.438 227364 DEBUG nova.compute.manager [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing instance network info cache due to event network-changed-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.438 227364 DEBUG oslo_concurrency.lockutils [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.438 227364 DEBUG oslo_concurrency.lockutils [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.438 227364 DEBUG nova.network.neutron [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Refreshing network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.517 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.517 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.518 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.518 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.518 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.519 227364 INFO nova.compute.manager [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Terminating instance#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.520 227364 DEBUG nova.compute.manager [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.603 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 kernel: tap6b7b58a4-b5 (unregistering): left promiscuous mode
Nov 29 03:46:17 np0005539551 NetworkManager[48922]: <info>  [1764405977.6112] device (tap6b7b58a4-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.619 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:17Z|00904|binding|INFO|Releasing lport 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b from this chassis (sb_readonly=0)
Nov 29 03:46:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:17Z|00905|binding|INFO|Setting lport 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b down in Southbound
Nov 29 03:46:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:17Z|00906|binding|INFO|Removing iface tap6b7b58a4-b5 ovn-installed in OVS
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.620 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.629 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:9a:71 10.100.0.6'], port_security=['fa:16:3e:12:9a:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '964cac7a-4c2d-4b5e-8522-81b99f1416b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c95871d-d156-4882-b0a0-97ff36c1744a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a6c30f76-b8e4-47e3-b764-b9886274509b e84628c0-0159-4dba-85f8-ba5fad9cdcdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfd6ff47-3ad6-4cdd-b1fa-0e77564ed30b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.630 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b in datapath 7c95871d-d156-4882-b0a0-97ff36c1744a unbound from our chassis#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.631 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c95871d-d156-4882-b0a0-97ff36c1744a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.632 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a3f53b-4541-4997-9888-f88ceb2ed309]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.632 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a namespace which is not needed anymore#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.640 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Nov 29 03:46:17 np0005539551 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d0.scope: Consumed 17.269s CPU time.
Nov 29 03:46:17 np0005539551 systemd-machined[190756]: Machine qemu-92-instance-000000d0 terminated.
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.738 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.744 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.757 227364 INFO nova.virt.libvirt.driver [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Instance destroyed successfully.#033[00m
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [NOTICE]   (300072) : haproxy version is 2.8.14-c23fe91
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [NOTICE]   (300072) : path to executable is /usr/sbin/haproxy
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [WARNING]  (300072) : Exiting Master process...
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [WARNING]  (300072) : Exiting Master process...
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.758 227364 DEBUG nova.objects.instance [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid 964cac7a-4c2d-4b5e-8522-81b99f1416b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [ALERT]    (300072) : Current worker (300074) exited with code 143 (Terminated)
Nov 29 03:46:17 np0005539551 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[300068]: [WARNING]  (300072) : All workers exited. Exiting... (0)
Nov 29 03:46:17 np0005539551 systemd[1]: libpod-fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe.scope: Deactivated successfully.
Nov 29 03:46:17 np0005539551 podman[300625]: 2025-11-29 08:46:17.7687813 +0000 UTC m=+0.054818686 container died fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.772 227364 DEBUG nova.virt.libvirt.vif [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-316697171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=208,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:45:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-c73j3yhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:13Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=964cac7a-4c2d-4b5e-8522-81b99f1416b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.772 227364 DEBUG nova.network.os_vif_util [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.774 227364 DEBUG nova.network.os_vif_util [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.775 227364 DEBUG os_vif [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.776 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.776 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b7b58a4-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.782 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.785 227364 INFO os_vif [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:9a:71,bridge_name='br-int',has_traffic_filtering=True,id=6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b7b58a4-b5')#033[00m
Nov 29 03:46:17 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe-userdata-shm.mount: Deactivated successfully.
Nov 29 03:46:17 np0005539551 systemd[1]: var-lib-containers-storage-overlay-9308a62708c24523adcbe92fde658df59065e081c803b70b759f30676464131e-merged.mount: Deactivated successfully.
Nov 29 03:46:17 np0005539551 podman[300625]: 2025-11-29 08:46:17.803326165 +0000 UTC m=+0.089363561 container cleanup fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:46:17 np0005539551 systemd[1]: libpod-conmon-fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe.scope: Deactivated successfully.
Nov 29 03:46:17 np0005539551 podman[300678]: 2025-11-29 08:46:17.868732257 +0000 UTC m=+0.041330681 container remove fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.875 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7b937a27-9ac9-4950-818a-bc60e2232245]: (4, ('Sat Nov 29 08:46:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a (fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe)\nfba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe\nSat Nov 29 08:46:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a (fba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe)\nfba23a98f7744f5fcaf6a1a5d337001323f08fec86fca092179f87b6bc371bfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.878 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1aff6923-bbec-42d7-aa3b-a98038eae601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.879 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c95871d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:17 np0005539551 kernel: tap7c95871d-d0: left promiscuous mode
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 nova_compute[227360]: 2025-11-29 08:46:17.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.896 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[7c92224a-3c32-423f-81eb-15f51fd62475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.911 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc9ab27-4201-4179-9f87-fe2c429ffeca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.912 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4bacb2d8-d912-483a-a041-1f9621ca4b38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.929 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[19ec473b-ecb5-4717-8fe7-52df6d6cc6dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 895899, 'reachable_time': 32617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300695, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.932 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:46:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:17.932 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[18c97f4c-9c50-463c-9472-0e30cba65e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:17 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7c95871d\x2dd156\x2d4882\x2db0a0\x2d97ff36c1744a.mount: Deactivated successfully.
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.163 227364 INFO nova.virt.libvirt.driver [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Deleting instance files /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5_del#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.164 227364 INFO nova.virt.libvirt.driver [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Deletion of /var/lib/nova/instances/964cac7a-4c2d-4b5e-8522-81b99f1416b5_del complete#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.210 227364 INFO nova.compute.manager [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.211 227364 DEBUG oslo.service.loopingcall [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.211 227364 DEBUG nova.compute.manager [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.212 227364 DEBUG nova.network.neutron [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:46:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.762 227364 DEBUG nova.network.neutron [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updated VIF entry in instance network info cache for port 6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.763 227364 DEBUG nova.network.neutron [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [{"id": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "address": "fa:16:3e:12:9a:71", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b7b58a4-b5", "ovs_interfaceid": "6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.786 227364 DEBUG oslo_concurrency.lockutils [req-2845af14-7065-4b64-acc6-2c6ccfe78a71 req-4e1e3e2c-1963-4b47-bc84-cdf15a433b97 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-964cac7a-4c2d-4b5e-8522-81b99f1416b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.847 227364 DEBUG nova.network.neutron [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.864 227364 INFO nova.compute.manager [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Took 0.65 seconds to deallocate network for instance.
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.898 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.898 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.909 227364 DEBUG nova.compute.manager [req-45a59277-e645-48ad-bf40-bef881b4e5e6 req-f484756b-3b7c-4b8d-acf5-2b18db24051f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-vif-deleted-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:46:18 np0005539551 nova_compute[227360]: 2025-11-29 08:46:18.947 227364 DEBUG oslo_concurrency.processutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:46:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1959131792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.412 227364 DEBUG oslo_concurrency.processutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.418 227364 DEBUG nova.compute.provider_tree [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.431 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.442 227364 DEBUG nova.scheduler.client.report [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.463 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.498 227364 INFO nova.scheduler.client.report [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance 964cac7a-4c2d-4b5e-8522-81b99f1416b5
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.548 227364 DEBUG nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-vif-unplugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.548 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.549 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.549 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.549 227364 DEBUG nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] No waiting events found dispatching network-vif-unplugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.550 227364 WARNING nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received unexpected event network-vif-unplugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b for instance with vm_state deleted and task_state None.
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.550 227364 DEBUG nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.550 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.550 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.551 227364 DEBUG oslo_concurrency.lockutils [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.551 227364 DEBUG nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] No waiting events found dispatching network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.551 227364 WARNING nova.compute.manager [req-01777a47-e1d8-499c-bcee-b3ca6d17b1a4 req-b8aef70c-2158-4721-b16f-c5ba038ed254 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Received unexpected event network-vif-plugged-6b7b58a4-b5a2-4cce-aa7c-aba7ba15fc7b for instance with vm_state deleted and task_state None.
Nov 29 03:46:19 np0005539551 nova_compute[227360]: 2025-11-29 08:46:19.592 227364 DEBUG oslo_concurrency.lockutils [None req-0983e3b4-d57b-4aaf-bad8-7fa472652627 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "964cac7a-4c2d-4b5e-8522-81b99f1416b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:19.895 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:19.895 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:46:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:22 np0005539551 nova_compute[227360]: 2025-11-29 08:46:22.778 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:24 np0005539551 nova_compute[227360]: 2025-11-29 08:46:24.433 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:24 np0005539551 nova_compute[227360]: 2025-11-29 08:46:24.634 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:24 np0005539551 nova_compute[227360]: 2025-11-29 08:46:24.815 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:26 np0005539551 podman[300892]: 2025-11-29 08:46:26.493146859 +0000 UTC m=+0.071178007 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:26 np0005539551 podman[300892]: 2025-11-29 08:46:26.599350435 +0000 UTC m=+0.177381593 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:46:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539551 nova_compute[227360]: 2025-11-29 08:46:27.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.370919631 +0000 UTC m=+0.039457219 container create b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 03:46:28 np0005539551 systemd[1]: Started libpod-conmon-b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41.scope.
Nov 29 03:46:28 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.444608497 +0000 UTC m=+0.113146095 container init b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.353355176 +0000 UTC m=+0.021892774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.452179652 +0000 UTC m=+0.120717250 container start b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.455521703 +0000 UTC m=+0.124059301 container attach b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:28 np0005539551 upbeat_banach[301303]: 167 167
Nov 29 03:46:28 np0005539551 systemd[1]: libpod-b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41.scope: Deactivated successfully.
Nov 29 03:46:28 np0005539551 conmon[301303]: conmon b13b4db8ea3d787eb931 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41.scope/container/memory.events
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.459016847 +0000 UTC m=+0.127554425 container died b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:28 np0005539551 systemd[1]: var-lib-containers-storage-overlay-aef9c1f6266434d6335765bc77d42a9d206023a3f1d08474143029e0ade1d30f-merged.mount: Deactivated successfully.
Nov 29 03:46:28 np0005539551 podman[301287]: 2025-11-29 08:46:28.505779254 +0000 UTC m=+0.174316832 container remove b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banach, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:46:28 np0005539551 systemd[1]: libpod-conmon-b13b4db8ea3d787eb93117060754422a980dd651c373b4315dd30b19edbfbf41.scope: Deactivated successfully.
Nov 29 03:46:28 np0005539551 podman[301326]: 2025-11-29 08:46:28.645120866 +0000 UTC m=+0.035357458 container create 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:28 np0005539551 systemd[1]: Started libpod-conmon-2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0.scope.
Nov 29 03:46:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:28 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:46:28 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39e6aa0675ee81dabb4f906b7e659e91cce8fcdbb4efdeb6e4e7e2dd500d2b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:28 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39e6aa0675ee81dabb4f906b7e659e91cce8fcdbb4efdeb6e4e7e2dd500d2b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:28 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39e6aa0675ee81dabb4f906b7e659e91cce8fcdbb4efdeb6e4e7e2dd500d2b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:28 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39e6aa0675ee81dabb4f906b7e659e91cce8fcdbb4efdeb6e4e7e2dd500d2b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:28 np0005539551 podman[301326]: 2025-11-29 08:46:28.720385704 +0000 UTC m=+0.110622346 container init 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:28 np0005539551 podman[301326]: 2025-11-29 08:46:28.62973037 +0000 UTC m=+0.019966982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:28 np0005539551 podman[301326]: 2025-11-29 08:46:28.73057558 +0000 UTC m=+0.120812192 container start 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:28 np0005539551 podman[301326]: 2025-11-29 08:46:28.733734855 +0000 UTC m=+0.123971467 container attach 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 29 03:46:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:29 np0005539551 nova_compute[227360]: 2025-11-29 08:46:29.435 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:29 np0005539551 focused_jang[301342]: [
Nov 29 03:46:29 np0005539551 focused_jang[301342]:    {
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "available": false,
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "ceph_device": false,
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "lsm_data": {},
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "lvs": [],
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "path": "/dev/sr0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "rejected_reasons": [
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "Has a FileSystem",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "Insufficient space (<5GB)"
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        ],
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        "sys_api": {
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "actuators": null,
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "device_nodes": "sr0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "devname": "sr0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "human_readable_size": "482.00 KB",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "id_bus": "ata",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "model": "QEMU DVD-ROM",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "nr_requests": "2",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "parent": "/dev/sr0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "partitions": {},
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "path": "/dev/sr0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "removable": "1",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "rev": "2.5+",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "ro": "0",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "rotational": "1",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "sas_address": "",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "sas_device_handle": "",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "scheduler_mode": "mq-deadline",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "sectors": 0,
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "sectorsize": "2048",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "size": 493568.0,
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "support_discard": "2048",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "type": "disk",
Nov 29 03:46:29 np0005539551 focused_jang[301342]:            "vendor": "QEMU"
Nov 29 03:46:29 np0005539551 focused_jang[301342]:        }
Nov 29 03:46:29 np0005539551 focused_jang[301342]:    }
Nov 29 03:46:29 np0005539551 focused_jang[301342]: ]
Nov 29 03:46:29 np0005539551 systemd[1]: libpod-2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0.scope: Deactivated successfully.
Nov 29 03:46:29 np0005539551 systemd[1]: libpod-2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0.scope: Consumed 1.232s CPU time.
Nov 29 03:46:30 np0005539551 podman[302540]: 2025-11-29 08:46:30.003192377 +0000 UTC m=+0.034977808 container died 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:46:30 np0005539551 systemd[1]: var-lib-containers-storage-overlay-c39e6aa0675ee81dabb4f906b7e659e91cce8fcdbb4efdeb6e4e7e2dd500d2b5-merged.mount: Deactivated successfully.
Nov 29 03:46:30 np0005539551 podman[302540]: 2025-11-29 08:46:30.058529215 +0000 UTC m=+0.090314656 container remove 2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:30 np0005539551 systemd[1]: libpod-conmon-2cf318159fc9d30327fb424f1cda1b7935f12e9a4347eab3ba6b0aecd6ea56a0.scope: Deactivated successfully.
Nov 29 03:46:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:30.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:32.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:32 np0005539551 nova_compute[227360]: 2025-11-29 08:46:32.756 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405977.755545, 964cac7a-4c2d-4b5e-8522-81b99f1416b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:32 np0005539551 nova_compute[227360]: 2025-11-29 08:46:32.757 227364 INFO nova.compute.manager [-] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:46:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:32 np0005539551 nova_compute[227360]: 2025-11-29 08:46:32.798 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:32 np0005539551 nova_compute[227360]: 2025-11-29 08:46:32.880 227364 DEBUG nova.compute.manager [None req-55572474-3e67-4224-89bb-d2e88d7f82f3 - - - - - -] [instance: 964cac7a-4c2d-4b5e-8522-81b99f1416b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:33 np0005539551 nova_compute[227360]: 2025-11-29 08:46:33.115 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:33 np0005539551 nova_compute[227360]: 2025-11-29 08:46:33.116 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.062 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.436 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.526 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.527 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.535 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.535 227364 INFO nova.compute.claims [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:46:34 np0005539551 nova_compute[227360]: 2025-11-29 08:46:34.656 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2103712282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:35 np0005539551 nova_compute[227360]: 2025-11-29 08:46:35.155 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:35 np0005539551 nova_compute[227360]: 2025-11-29 08:46:35.162 227364 DEBUG nova.compute.provider_tree [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.161 227364 DEBUG nova.scheduler.client.report [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.189 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.189 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.240 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.240 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.261 227364 INFO nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.287 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.362 227364 INFO nova.virt.block_device [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Booting with volume 3f901e23-17b3-4b75-8f9b-33496e3b8402 at /dev/vda#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.510 227364 DEBUG nova.policy [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.640 227364 DEBUG os_brick.utils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.641 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.652 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.652 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[74c0ff34-3556-4f7f-81f4-cbb67e2c3bbd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.654 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.666 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.666 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[7b46d482-1480-49f1-8683-1a9d24fb73ae]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.668 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.677 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.677 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[74f9d35e-b0e9-4b6e-8baa-bd4a5ad51a67]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.678 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d9a5ca-6c6b-4eba-84ba-596d8ffbc788]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.679 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.713 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.716 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.717 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.717 227364 DEBUG os_brick.initiator.connectors.lightos [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.718 227364 DEBUG os_brick.utils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:46:36 np0005539551 nova_compute[227360]: 2025-11-29 08:46:36.718 227364 DEBUG nova.virt.block_device [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updating existing volume attachment record: 3b199221-236b-4e10-a8d7-4a286122c5ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:46:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539551 nova_compute[227360]: 2025-11-29 08:46:37.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:39 np0005539551 nova_compute[227360]: 2025-11-29 08:46:39.438 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:39 np0005539551 podman[302635]: 2025-11-29 08:46:39.62101641 +0000 UTC m=+0.072246147 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 29 03:46:39 np0005539551 podman[302636]: 2025-11-29 08:46:39.629622812 +0000 UTC m=+0.066267354 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:46:39 np0005539551 podman[302634]: 2025-11-29 08:46:39.653580622 +0000 UTC m=+0.108243253 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.132 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:40.133 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:40.134 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:40.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.958 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.960 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.960 227364 INFO nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Creating image(s)#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.961 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.961 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Ensure instance console log exists: /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.961 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.961 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:40 np0005539551 nova_compute[227360]: 2025-11-29 08:46:40.962 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:42 np0005539551 nova_compute[227360]: 2025-11-29 08:46:42.490 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Successfully created port: 24be8773-ea21-45a8-9ea1-63d0f273aa40 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:46:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:42 np0005539551 nova_compute[227360]: 2025-11-29 08:46:42.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539551 nova_compute[227360]: 2025-11-29 08:46:43.984 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Successfully updated port: 24be8773-ea21-45a8-9ea1-63d0f273aa40 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.003 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.004 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.004 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.154 227364 DEBUG nova.compute.manager [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-changed-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.155 227364 DEBUG nova.compute.manager [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Refreshing instance network info cache due to event network-changed-24be8773-ea21-45a8-9ea1-63d0f273aa40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.155 227364 DEBUG oslo_concurrency.lockutils [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.221 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:46:44 np0005539551 nova_compute[227360]: 2025-11-29 08:46:44.441 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:46 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:46.136 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:46.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.816 227364 DEBUG nova.network.neutron [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updating instance_info_cache with network_info: [{"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.878 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.878 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance network_info: |[{"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.879 227364 DEBUG oslo_concurrency.lockutils [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.879 227364 DEBUG nova.network.neutron [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Refreshing network info cache for port 24be8773-ea21-45a8-9ea1-63d0f273aa40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.881 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Start _get_guest_xml network_info=[{"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3f901e23-17b3-4b75-8f9b-33496e3b8402', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3f901e23-17b3-4b75-8f9b-33496e3b8402', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9db49dc0-dfcc-4700-a0ff-de58bff013ee', 'attached_at': '', 'detached_at': '', 'volume_id': '3f901e23-17b3-4b75-8f9b-33496e3b8402', 'serial': '3f901e23-17b3-4b75-8f9b-33496e3b8402'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '3b199221-236b-4e10-a8d7-4a286122c5ad', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.886 227364 WARNING nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.892 227364 DEBUG nova.virt.libvirt.host [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.893 227364 DEBUG nova.virt.libvirt.host [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.897 227364 DEBUG nova.virt.libvirt.host [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.898 227364 DEBUG nova.virt.libvirt.host [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.899 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.899 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.899 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.900 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.900 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.900 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.900 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.900 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.901 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.901 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.901 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.901 227364 DEBUG nova.virt.hardware [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.928 227364 DEBUG nova.storage.rbd_utils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:46 np0005539551 nova_compute[227360]: 2025-11-29 08:46:46.932 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:46:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2965583625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.367 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.556 227364 DEBUG os_brick.encryptors [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Using volume encryption metadata '{'encryption_key_id': '091a9a2e-d1cc-44d2-be96-b1719b550e5c', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3f901e23-17b3-4b75-8f9b-33496e3b8402', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3f901e23-17b3-4b75-8f9b-33496e3b8402', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9db49dc0-dfcc-4700-a0ff-de58bff013ee', 'attached_at': '', 'detached_at': '', 'volume_id': '3f901e23-17b3-4b75-8f9b-33496e3b8402', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.561 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.584 227364 DEBUG barbicanclient.v1.secrets [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.585 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.613 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.614 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.655 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.656 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.688 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.689 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.712 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.713 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.745 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.746 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.770 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.771 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.805 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.806 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.825 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.826 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.852 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.853 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.890 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.890 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.916 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.917 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.933 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.933 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.956 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.956 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.998 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:47 np0005539551 nova_compute[227360]: 2025-11-29 08:46:47.998 227364 INFO barbicanclient.base [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Calculated Secrets uuid ref: secrets/091a9a2e-d1cc-44d2-be96-b1719b550e5c#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.027 227364 DEBUG barbicanclient.client [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.028 227364 DEBUG nova.virt.libvirt.host [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <usage type="volume">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <volume>3f901e23-17b3-4b75-8f9b-33496e3b8402</volume>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </usage>
Nov 29 03:46:48 np0005539551 nova_compute[227360]: </secret>
Nov 29 03:46:48 np0005539551 nova_compute[227360]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.071 227364 DEBUG nova.virt.libvirt.vif [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1639731580',display_name='tempest-TestVolumeBootPattern-server-1639731580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1639731580',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-zawjw05l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:36Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9db49dc0-dfcc-4700-a0ff-de58bff013ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.072 227364 DEBUG nova.network.os_vif_util [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.073 227364 DEBUG nova.network.os_vif_util [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.075 227364 DEBUG nova.objects.instance [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 9db49dc0-dfcc-4700-a0ff-de58bff013ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.123 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <uuid>9db49dc0-dfcc-4700-a0ff-de58bff013ee</uuid>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <name>instance-000000d4</name>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestVolumeBootPattern-server-1639731580</nova:name>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:46:46</nova:creationTime>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <nova:port uuid="24be8773-ea21-45a8-9ea1-63d0f273aa40">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="serial">9db49dc0-dfcc-4700-a0ff-de58bff013ee</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="uuid">9db49dc0-dfcc-4700-a0ff-de58bff013ee</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-3f901e23-17b3-4b75-8f9b-33496e3b8402">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <serial>3f901e23-17b3-4b75-8f9b-33496e3b8402</serial>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <encryption format="luks">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:        <secret type="passphrase" uuid="68278e4e-035c-4d10-960e-f6d51b1464e1"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      </encryption>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:1b:b1:e6"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <target dev="tap24be8773-ea"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/console.log" append="off"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:46:48 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:46:48 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:46:48 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:46:48 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.125 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Preparing to wait for external event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.126 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.126 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.126 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.127 227364 DEBUG nova.virt.libvirt.vif [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1639731580',display_name='tempest-TestVolumeBootPattern-server-1639731580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1639731580',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-zawjw05l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:36Z,user_data=
None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9db49dc0-dfcc-4700-a0ff-de58bff013ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.127 227364 DEBUG nova.network.os_vif_util [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.128 227364 DEBUG nova.network.os_vif_util [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.128 227364 DEBUG os_vif [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.129 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.129 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.130 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.133 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.133 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24be8773-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.133 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24be8773-ea, col_values=(('external_ids', {'iface-id': '24be8773-ea21-45a8-9ea1-63d0f273aa40', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:b1:e6', 'vm-uuid': '9db49dc0-dfcc-4700-a0ff-de58bff013ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.134 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539551 NetworkManager[48922]: <info>  [1764406008.1354] manager: (tap24be8773-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.136 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.141 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.142 227364 INFO os_vif [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea')#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.201 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.202 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.202 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:1b:b1:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.202 227364 INFO nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Using config drive#033[00m
Nov 29 03:46:48 np0005539551 nova_compute[227360]: 2025-11-29 08:46:48.228 227364 DEBUG nova.storage.rbd_utils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.443 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.506 227364 INFO nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Creating config drive at /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.510 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgm7ft5zu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.649 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgm7ft5zu" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.693 227364 DEBUG nova.storage.rbd_utils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.699 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config 9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.870 227364 DEBUG oslo_concurrency.processutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config 9db49dc0-dfcc-4700-a0ff-de58bff013ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.872 227364 INFO nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Deleting local config drive /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee/disk.config because it was imported into RBD.#033[00m
Nov 29 03:46:49 np0005539551 kernel: tap24be8773-ea: entered promiscuous mode
Nov 29 03:46:49 np0005539551 NetworkManager[48922]: <info>  [1764406009.9244] manager: (tap24be8773-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 03:46:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:49Z|00907|binding|INFO|Claiming lport 24be8773-ea21-45a8-9ea1-63d0f273aa40 for this chassis.
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.926 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:49Z|00908|binding|INFO|24be8773-ea21-45a8-9ea1-63d0f273aa40: Claiming fa:16:3e:1b:b1:e6 10.100.0.14
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.929 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.934 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.940 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:b1:e6 10.100.0.14'], port_security=['fa:16:3e:1b:b1:e6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9db49dc0-dfcc-4700-a0ff-de58bff013ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e58ae615-f827-4417-b716-6b0f5aa1e5ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=24be8773-ea21-45a8-9ea1-63d0f273aa40) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.941 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 24be8773-ea21-45a8-9ea1-63d0f273aa40 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.942 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.952 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[277fd84d-7192-45d0-9488-ea39b7743730]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.953 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.954 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.954 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e164ecb8-39fc-4228-acc2-6802d9fdc2ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.955 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2869382f-1289-4000-95fb-6bd378f35619]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:49 np0005539551 systemd-machined[190756]: New machine qemu-93-instance-000000d4.
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.969 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4b9cc0-0aed-4f3b-b366-b1df492cef3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:49 np0005539551 systemd[1]: Started Virtual Machine qemu-93-instance-000000d4.
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.993 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:49.994 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa56dba-f6c3-4e5f-be09-070cba87a115]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:49Z|00909|binding|INFO|Setting lport 24be8773-ea21-45a8-9ea1-63d0f273aa40 ovn-installed in OVS
Nov 29 03:46:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:49Z|00910|binding|INFO|Setting lport 24be8773-ea21-45a8-9ea1-63d0f273aa40 up in Southbound
Nov 29 03:46:49 np0005539551 nova_compute[227360]: 2025-11-29 08:46:49.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:50 np0005539551 systemd-udevd[302816]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:46:50 np0005539551 NetworkManager[48922]: <info>  [1764406010.0193] device (tap24be8773-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:46:50 np0005539551 NetworkManager[48922]: <info>  [1764406010.0206] device (tap24be8773-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.028 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[5b923e72-7c1d-4b57-b6e3-beb80655bf85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 NetworkManager[48922]: <info>  [1764406010.0336] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.033 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e98ef3-09cf-48cf-a53c-0e5adadc54ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.068 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[b57df8ce-01e0-4a59-9c9a-4be194e4e1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.070 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7c74aa-6301-48e2-bf43-84b1175eba03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 NetworkManager[48922]: <info>  [1764406010.0956] device (tap3d510715-d0): carrier: link connected
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.100 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a0a382-3875-47d1-9d8a-3820363ab717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.118 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa5a88f-524a-4cd2-adc3-0d655eaca879]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905697, 'reachable_time': 22082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302845, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.133 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[39c151a1-5973-458f-b22f-bba524efb8e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 905697, 'tstamp': 905697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302846, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.151 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[42b056b0-9d06-4a80-b5f6-cc480591aaa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905697, 'reachable_time': 22082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302847, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.174 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[64df666d-743d-4361-a12e-05b541feef4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.232 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff7bf2-cacb-4afe-b29a-006f5549d50a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.233 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.234 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.234 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.236 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:50 np0005539551 NetworkManager[48922]: <info>  [1764406010.2366] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 03:46:50 np0005539551 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.239 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.240 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:50Z|00911|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.255 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.256 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.257 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2f298a2c-40b0-42ae-b4d6-13c65de606c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.258 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:46:50 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:50.259 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.562 227364 DEBUG nova.network.neutron [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updated VIF entry in instance network info cache for port 24be8773-ea21-45a8-9ea1-63d0f273aa40. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.564 227364 DEBUG nova.network.neutron [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updating instance_info_cache with network_info: [{"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:50 np0005539551 nova_compute[227360]: 2025-11-29 08:46:50.602 227364 DEBUG oslo_concurrency.lockutils [req-2fd585b9-6165-43be-9bae-7452cbea2d80 req-ce2e18cb-2220-4a7e-bbac-0040762505a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:50 np0005539551 podman[302879]: 2025-11-29 08:46:50.648051257 +0000 UTC m=+0.060817798 container create 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:46:50 np0005539551 systemd[1]: Started libpod-conmon-4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384.scope.
Nov 29 03:46:50 np0005539551 podman[302879]: 2025-11-29 08:46:50.615036613 +0000 UTC m=+0.027803244 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:46:50 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:46:50 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc8979aefae0dee4f87a5a7869aad1f8e500416581b8a0e4b80875a760e247d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:50 np0005539551 podman[302879]: 2025-11-29 08:46:50.741093976 +0000 UTC m=+0.153860557 container init 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:46:50 np0005539551 podman[302879]: 2025-11-29 08:46:50.746465312 +0000 UTC m=+0.159231863 container start 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:50 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [NOTICE]   (302930) : New worker (302936) forked
Nov 29 03:46:50 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [NOTICE]   (302930) : Loading success.
Nov 29 03:46:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:51 np0005539551 nova_compute[227360]: 2025-11-29 08:46:51.607 227364 DEBUG nova.compute.manager [req-871b4d7e-0d0d-4dee-b44a-66fdb565c70a req-5f692be3-36e7-449e-b50d-e61b1f23ab3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:51 np0005539551 nova_compute[227360]: 2025-11-29 08:46:51.608 227364 DEBUG oslo_concurrency.lockutils [req-871b4d7e-0d0d-4dee-b44a-66fdb565c70a req-5f692be3-36e7-449e-b50d-e61b1f23ab3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:51 np0005539551 nova_compute[227360]: 2025-11-29 08:46:51.608 227364 DEBUG oslo_concurrency.lockutils [req-871b4d7e-0d0d-4dee-b44a-66fdb565c70a req-5f692be3-36e7-449e-b50d-e61b1f23ab3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:51 np0005539551 nova_compute[227360]: 2025-11-29 08:46:51.608 227364 DEBUG oslo_concurrency.lockutils [req-871b4d7e-0d0d-4dee-b44a-66fdb565c70a req-5f692be3-36e7-449e-b50d-e61b1f23ab3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:51 np0005539551 nova_compute[227360]: 2025-11-29 08:46:51.608 227364 DEBUG nova.compute.manager [req-871b4d7e-0d0d-4dee-b44a-66fdb565c70a req-5f692be3-36e7-449e-b50d-e61b1f23ab3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Processing event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:46:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.448 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.449 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3410835600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:52 np0005539551 nova_compute[227360]: 2025-11-29 08:46:52.917 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.137 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.323 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406013.3227036, 9db49dc0-dfcc-4700-a0ff-de58bff013ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.323 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] VM Started (Lifecycle Event)#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.325 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.333 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.337 227364 INFO nova.virt.libvirt.driver [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance spawned successfully.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.338 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.448 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.456 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.456 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.457 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.458 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.458 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.458 227364 DEBUG nova.virt.libvirt.driver [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.466 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.472 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.472 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.505 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.506 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406013.3228831, 9db49dc0-dfcc-4700-a0ff-de58bff013ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.506 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.563 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.566 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406013.32915, 9db49dc0-dfcc-4700-a0ff-de58bff013ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.567 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.595 227364 INFO nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Took 12.64 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.595 227364 DEBUG nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.597 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.605 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.667 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.699 227364 INFO nova.compute.manager [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Took 19.20 seconds to build instance.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.705 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.706 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4120MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.706 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.707 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.744 227364 DEBUG oslo_concurrency.lockutils [None req-b16d5f41-6cc3-484e-8433-f7e40ea2822a b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.810 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 9db49dc0-dfcc-4700-a0ff-de58bff013ee actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.810 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.811 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.825 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.848 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.848 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.873 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.895 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.925 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.960 227364 DEBUG nova.compute.manager [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.961 227364 DEBUG oslo_concurrency.lockutils [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.961 227364 DEBUG oslo_concurrency.lockutils [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.962 227364 DEBUG oslo_concurrency.lockutils [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.962 227364 DEBUG nova.compute.manager [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] No waiting events found dispatching network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:53 np0005539551 nova_compute[227360]: 2025-11-29 08:46:53.962 227364 WARNING nova.compute.manager [req-7c8c06e0-ee78-4210-a7ff-40110329f2f9 req-52a4758d-46df-4a6f-b49d-852d9e070de3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received unexpected event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:46:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1806510511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.367 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.377 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.401 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.432 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.432 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:54 np0005539551 nova_compute[227360]: 2025-11-29 08:46:54.446 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:54.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.433 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.434 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.434 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.934 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.934 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.935 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.935 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.936 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.937 227364 INFO nova.compute.manager [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Terminating instance#033[00m
Nov 29 03:46:57 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.938 227364 DEBUG nova.compute.manager [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:46:57 np0005539551 kernel: tap24be8773-ea (unregistering): left promiscuous mode
Nov 29 03:46:57 np0005539551 NetworkManager[48922]: <info>  [1764406017.9833] device (tap24be8773-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:57 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:57Z|00912|binding|INFO|Releasing lport 24be8773-ea21-45a8-9ea1-63d0f273aa40 from this chassis (sb_readonly=0)
Nov 29 03:46:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:57Z|00913|binding|INFO|Setting lport 24be8773-ea21-45a8-9ea1-63d0f273aa40 down in Southbound
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:57.998 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 ovn_controller[130266]: 2025-11-29T08:46:58Z|00914|binding|INFO|Removing iface tap24be8773-ea ovn-installed in OVS
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.003 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.003 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.003 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.004 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9db49dc0-dfcc-4700-a0ff-de58bff013ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:58.008 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:b1:e6 10.100.0.14'], port_security=['fa:16:3e:1b:b1:e6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9db49dc0-dfcc-4700-a0ff-de58bff013ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e58ae615-f827-4417-b716-6b0f5aa1e5ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=24be8773-ea21-45a8-9ea1-63d0f273aa40) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:58.011 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 24be8773-ea21-45a8-9ea1-63d0f273aa40 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:46:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:58.015 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:46:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:58.016 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca1e6f5-cab8-45a1-92e3-e9eecad75803]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:58.017 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.026 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Nov 29 03:46:58 np0005539551 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d4.scope: Consumed 3.756s CPU time.
Nov 29 03:46:58 np0005539551 systemd-machined[190756]: Machine qemu-93-instance-000000d4 terminated.
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.138 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.176 227364 INFO nova.virt.libvirt.driver [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance destroyed successfully.#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.176 227364 DEBUG nova.objects.instance [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 9db49dc0-dfcc-4700-a0ff-de58bff013ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.191 227364 DEBUG nova.virt.libvirt.vif [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1639731580',display_name='tempest-TestVolumeBootPattern-server-1639731580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1639731580',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-zawjw05l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:46:53Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9db49dc0-dfcc-4700-a0ff-de58bff013ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.192 227364 DEBUG nova.network.os_vif_util [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.193 227364 DEBUG nova.network.os_vif_util [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.193 227364 DEBUG os_vif [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.195 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.195 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24be8773-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539551 nova_compute[227360]: 2025-11-29 08:46:58.200 227364 INFO os_vif [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:b1:e6,bridge_name='br-int',has_traffic_filtering=True,id=24be8773-ea21-45a8-9ea1-63d0f273aa40,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24be8773-ea')#033[00m
Nov 29 03:46:58 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [NOTICE]   (302930) : haproxy version is 2.8.14-c23fe91
Nov 29 03:46:58 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [NOTICE]   (302930) : path to executable is /usr/sbin/haproxy
Nov 29 03:46:58 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [WARNING]  (302930) : Exiting Master process...
Nov 29 03:46:58 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [ALERT]    (302930) : Current worker (302936) exited with code 143 (Terminated)
Nov 29 03:46:58 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[302901]: [WARNING]  (302930) : All workers exited. Exiting... (0)
Nov 29 03:46:58 np0005539551 systemd[1]: libpod-4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384.scope: Deactivated successfully.
Nov 29 03:46:58 np0005539551 podman[303021]: 2025-11-29 08:46:58.432788546 +0000 UTC m=+0.321807705 container died 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:46:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:46:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:58.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.062 227364 DEBUG nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-unplugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.062 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.062 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.062 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] No waiting events found dispatching network-vif-unplugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-unplugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.063 227364 DEBUG oslo_concurrency.lockutils [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.064 227364 DEBUG nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] No waiting events found dispatching network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.064 227364 WARNING nova.compute.manager [req-528ff34b-4876-40c8-9aec-28dc4cfd8460 req-0e9bf167-28d9-4e50-be67-a28c7829b2ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received unexpected event network-vif-plugged-24be8773-ea21-45a8-9ea1-63d0f273aa40 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:46:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384-userdata-shm.mount: Deactivated successfully.
Nov 29 03:46:59 np0005539551 systemd[1]: var-lib-containers-storage-overlay-bbc8979aefae0dee4f87a5a7869aad1f8e500416581b8a0e4b80875a760e247d-merged.mount: Deactivated successfully.
Nov 29 03:46:59 np0005539551 podman[303021]: 2025-11-29 08:46:59.223221738 +0000 UTC m=+1.112240907 container cleanup 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:46:59 np0005539551 podman[303080]: 2025-11-29 08:46:59.279886832 +0000 UTC m=+0.035775330 container remove 4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.286 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e85649fb-7a97-4831-a704-84c88222a833]: (4, ('Sat Nov 29 08:46:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384)\n4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384\nSat Nov 29 08:46:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384)\n4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.287 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6198d4-2aea-4559-bd1f-a12b8cd9a991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.289 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:59 np0005539551 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.299 227364 INFO nova.virt.libvirt.driver [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Deleting instance files /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee_del#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.300 227364 INFO nova.virt.libvirt.driver [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Deletion of /var/lib/nova/instances/9db49dc0-dfcc-4700-a0ff-de58bff013ee_del complete#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.306 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff9f833-b6a8-45e4-8a95-d8068457c6d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.323 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c1161ed5-8768-4828-859b-e4cfcc8f14f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.325 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5405c6-03cc-480f-a011-d72da15d21ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.342 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c338c637-f414-4f19-bd9d-58f9ffa167fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905690, 'reachable_time': 21375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303097, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.344 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:46:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:46:59.345 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6775fe15-c45b-40bf-84a6-fb3f1885c1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:59 np0005539551 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.358 227364 INFO nova.compute.manager [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Took 1.42 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.359 227364 DEBUG oslo.service.loopingcall [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.359 227364 DEBUG nova.compute.manager [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.360 227364 DEBUG nova.network.neutron [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:46:59 np0005539551 systemd[1]: libpod-conmon-4290186cde28a3e4b1b8e095e863637ede6560057c93e749ccd44437db90d384.scope: Deactivated successfully.
Nov 29 03:46:59 np0005539551 nova_compute[227360]: 2025-11-29 08:46:59.448 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.007 227364 DEBUG nova.network.neutron [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.028 227364 INFO nova.compute.manager [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Took 0.67 seconds to deallocate network for instance.#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.288 227364 INFO nova.compute.manager [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Took 0.26 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.348 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.348 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.437 227364 DEBUG oslo_concurrency.processutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.728 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updating instance_info_cache with network_info: [{"id": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "address": "fa:16:3e:1b:b1:e6", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24be8773-ea", "ovs_interfaceid": "24be8773-ea21-45a8-9ea1-63d0f273aa40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.746 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-9db49dc0-dfcc-4700-a0ff-de58bff013ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.747 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:47:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.747 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.748 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:00.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3859940345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.869 227364 DEBUG oslo_concurrency.processutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.875 227364 DEBUG nova.compute.provider_tree [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.892 227364 DEBUG nova.scheduler.client.report [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.922 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:00 np0005539551 nova_compute[227360]: 2025-11-29 08:47:00.957 227364 INFO nova.scheduler.client.report [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 9db49dc0-dfcc-4700-a0ff-de58bff013ee
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.031 227364 DEBUG oslo_concurrency.lockutils [None req-e2c696cb-dbe5-4dca-80e4-8f84b4c1807d b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9db49dc0-dfcc-4700-a0ff-de58bff013ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.143 227364 DEBUG nova.compute.manager [req-d8aa0181-3be1-44f7-aa38-eaf87afed8f1 req-1977a994-ccae-417c-a684-cad80da20213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Received event network-vif-deleted-24be8773-ea21-45a8-9ea1-63d0f273aa40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.144 227364 INFO nova.compute.manager [req-d8aa0181-3be1-44f7-aa38-eaf87afed8f1 req-1977a994-ccae-417c-a684-cad80da20213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Neutron deleted interface 24be8773-ea21-45a8-9ea1-63d0f273aa40; detaching it from the instance and deleting it from the info cache
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.144 227364 DEBUG nova.network.neutron [req-d8aa0181-3be1-44f7-aa38-eaf87afed8f1 req-1977a994-ccae-417c-a684-cad80da20213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.147 227364 DEBUG nova.compute.manager [req-d8aa0181-3be1-44f7-aa38-eaf87afed8f1 req-1977a994-ccae-417c-a684-cad80da20213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Detach interface failed, port_id=24be8773-ea21-45a8-9ea1-63d0f273aa40, reason: Instance 9db49dc0-dfcc-4700-a0ff-de58bff013ee could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 03:47:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:01 np0005539551 nova_compute[227360]: 2025-11-29 08:47:01.719 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:03 np0005539551 nova_compute[227360]: 2025-11-29 08:47:03.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:04 np0005539551 nova_compute[227360]: 2025-11-29 08:47:04.451 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:04.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:06 np0005539551 nova_compute[227360]: 2025-11-29 08:47:06.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:07 np0005539551 nova_compute[227360]: 2025-11-29 08:47:07.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:08 np0005539551 nova_compute[227360]: 2025-11-29 08:47:08.201 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:08 np0005539551 nova_compute[227360]: 2025-11-29 08:47:08.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:47:08 np0005539551 nova_compute[227360]: 2025-11-29 08:47:08.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:47:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:47:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:47:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539551 nova_compute[227360]: 2025-11-29 08:47:09.452 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:10 np0005539551 podman[303122]: 2025-11-29 08:47:10.619052221 +0000 UTC m=+0.063793759 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:47:10 np0005539551 podman[303121]: 2025-11-29 08:47:10.64599124 +0000 UTC m=+0.094004396 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:47:10 np0005539551 podman[303120]: 2025-11-29 08:47:10.646257287 +0000 UTC m=+0.100929424 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.659106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030659181, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 258, "total_data_size": 4930071, "memory_usage": 5016008, "flush_reason": "Manual Compaction"}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030670867, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 2049691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67729, "largest_seqno": 69891, "table_properties": {"data_size": 2042770, "index_size": 3674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18455, "raw_average_key_size": 21, "raw_value_size": 2027416, "raw_average_value_size": 2376, "num_data_blocks": 161, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405867, "oldest_key_time": 1764405867, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 11795 microseconds, and 5000 cpu microseconds.
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.670912) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 2049691 bytes OK
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.670930) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.672198) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.672209) EVENT_LOG_v1 {"time_micros": 1764406030672206, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.672227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 4920325, prev total WAL file size 4920325, number of live WAL files 2.
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.673558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353036' seq:0, type:0; will stop at (end)
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(2001KB)], [138(12MB)]
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030673593, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 14906243, "oldest_snapshot_seqno": -1}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 10121 keys, 12178307 bytes, temperature: kUnknown
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030741991, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 12178307, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12114580, "index_size": 37310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 265981, "raw_average_key_size": 26, "raw_value_size": 11938531, "raw_average_value_size": 1179, "num_data_blocks": 1420, "num_entries": 10121, "num_filter_entries": 10121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.742590) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 12178307 bytes
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.744036) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.4 rd, 177.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 10581, records dropped: 460 output_compression: NoCompression
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.744072) EVENT_LOG_v1 {"time_micros": 1764406030744055, "job": 88, "event": "compaction_finished", "compaction_time_micros": 68572, "compaction_time_cpu_micros": 27296, "output_level": 6, "num_output_files": 1, "total_output_size": 12178307, "num_input_records": 10581, "num_output_records": 10121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030745001, "job": 88, "event": "table_file_deletion", "file_number": 140}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030749649, "job": 88, "event": "table_file_deletion", "file_number": 138}
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.673453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.749703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.749712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.749717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.749721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:10.749725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:10.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:13 np0005539551 nova_compute[227360]: 2025-11-29 08:47:13.176 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406018.1735823, 9db49dc0-dfcc-4700-a0ff-de58bff013ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:47:13 np0005539551 nova_compute[227360]: 2025-11-29 08:47:13.176 227364 INFO nova.compute.manager [-] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] VM Stopped (Lifecycle Event)
Nov 29 03:47:13 np0005539551 nova_compute[227360]: 2025-11-29 08:47:13.199 227364 DEBUG nova.compute.manager [None req-871465b8-11ff-4e9b-afb7-01f60c5ea52b - - - - - -] [instance: 9db49dc0-dfcc-4700-a0ff-de58bff013ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:47:13 np0005539551 nova_compute[227360]: 2025-11-29 08:47:13.204 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:14 np0005539551 nova_compute[227360]: 2025-11-29 08:47:14.455 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:14.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 03:47:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:16.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:16.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:16.971 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:47:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:16.972 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:47:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:16.972 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:16 np0005539551 nova_compute[227360]: 2025-11-29 08:47:16.975 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:18 np0005539551 nova_compute[227360]: 2025-11-29 08:47:18.208 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:18.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:18.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:19 np0005539551 nova_compute[227360]: 2025-11-29 08:47:19.456 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:20.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:20.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:22.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:23 np0005539551 nova_compute[227360]: 2025-11-29 08:47:23.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:24 np0005539551 nova_compute[227360]: 2025-11-29 08:47:24.458 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:24.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:26.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.232 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.233 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.251 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.372 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.373 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.378 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.378 227364 INFO nova.compute.claims [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:47:28 np0005539551 nova_compute[227360]: 2025-11-29 08:47:28.592 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:28.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1719270791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.058 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.064 227364 DEBUG nova.compute.provider_tree [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.082 227364 DEBUG nova.scheduler.client.report [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.107 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.108 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.163 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.164 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.186 227364 INFO nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.215 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.325 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.327 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.328 227364 INFO nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating image(s)#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.372 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.411 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.442 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.447 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.483 227364 DEBUG nova.policy [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d446574294425e9bc89e596ea56dc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.522 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.522 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.523 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.523 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.549 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:29 np0005539551 nova_compute[227360]: 2025-11-29 08:47:29.553 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.018 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.098 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] resizing rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.137 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Successfully created port: bef873ea-4c5b-4b48-8022-9c3171bdab37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.227 227364 DEBUG nova.objects.instance [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.258 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.259 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Ensure instance console log exists: /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.260 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.260 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.261 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.775 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Successfully updated port: bef873ea-4c5b-4b48-8022-9c3171bdab37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.790 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.790 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.791 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:47:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:30.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.902 227364 DEBUG nova.compute.manager [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.903 227364 DEBUG nova.compute.manager [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing instance network info cache due to event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.903 227364 DEBUG oslo_concurrency.lockutils [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:30 np0005539551 nova_compute[227360]: 2025-11-29 08:47:30.966 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.706 227364 DEBUG nova.network.neutron [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.739 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.739 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance network_info: |[{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.739 227364 DEBUG oslo_concurrency.lockutils [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.739 227364 DEBUG nova.network.neutron [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.741 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start _get_guest_xml network_info=[{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_format': None, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.746 227364 WARNING nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.750 227364 DEBUG nova.virt.libvirt.host [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.751 227364 DEBUG nova.virt.libvirt.host [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.755 227364 DEBUG nova.virt.libvirt.host [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.755 227364 DEBUG nova.virt.libvirt.host [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.756 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.756 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.757 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.757 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.757 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.757 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.758 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.758 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.758 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.758 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.759 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.759 227364 DEBUG nova.virt.hardware [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:47:31 np0005539551 nova_compute[227360]: 2025-11-29 08:47:31.762 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3207227971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.197 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.240 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.247 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1443064577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.729 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.731 227364 DEBUG nova.virt.libvirt.vif [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6ZJ0e8HULKzf15gAhzf0Pozq+BNpsY6JGJkWj4En/gstCBUIDEBBlhlRRy2j+ObTo/Olxsq8yNRaWE1A2BtIbVFq5FCJEzTcF45GwsvPUIsE1i0kjUMi8fiaEVQJMjJA==',key_name='tempest-TestShelveInstance-892403284',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:29Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.732 227364 DEBUG nova.network.os_vif_util [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.734 227364 DEBUG nova.network.os_vif_util [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.736 227364 DEBUG nova.objects.instance [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.751 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <uuid>6b7b3384-3de2-4098-b77f-3658f82eedfc</uuid>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <name>instance-000000d7</name>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestShelveInstance-server-1254459075</nova:name>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:47:31</nova:creationTime>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:user uuid="14d446574294425e9bc89e596ea56dc9">tempest-TestShelveInstance-498716578-project-member</nova:user>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:project uuid="2e636ab14fe94059b82b9cbcf8831d87">tempest-TestShelveInstance-498716578</nova:project>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <nova:port uuid="bef873ea-4c5b-4b48-8022-9c3171bdab37">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="serial">6b7b3384-3de2-4098-b77f-3658f82eedfc</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="uuid">6b7b3384-3de2-4098-b77f-3658f82eedfc</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:cd:1a:28"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <target dev="tapbef873ea-4c"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/console.log" append="off"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:47:32 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:47:32 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:47:32 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:47:32 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.753 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Preparing to wait for external event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.754 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.755 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.755 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.756 227364 DEBUG nova.virt.libvirt.vif [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6ZJ0e8HULKzf15gAhzf0Pozq+BNpsY6JGJkWj4En/gstCBUIDEBBlhlRRy2j+ObTo/Olxsq8yNRaWE1A2BtIbVFq5FCJEzTcF45GwsvPUIsE1i0kjUMi8fiaEVQJMjJA==',key_name='tempest-TestShelveInstance-892403284',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:29Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.756 227364 DEBUG nova.network.os_vif_util [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.757 227364 DEBUG nova.network.os_vif_util [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.757 227364 DEBUG os_vif [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.758 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.758 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.759 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.762 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.763 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbef873ea-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.763 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbef873ea-4c, col_values=(('external_ids', {'iface-id': 'bef873ea-4c5b-4b48-8022-9c3171bdab37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:1a:28', 'vm-uuid': '6b7b3384-3de2-4098-b77f-3658f82eedfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.764 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539551 NetworkManager[48922]: <info>  [1764406052.7659] manager: (tapbef873ea-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.767 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.774 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.775 227364 INFO os_vif [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c')#033[00m
Nov 29 03:47:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:32.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.826 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.827 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.827 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No VIF found with MAC fa:16:3e:cd:1a:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.828 227364 INFO nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Using config drive#033[00m
Nov 29 03:47:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:32 np0005539551 nova_compute[227360]: 2025-11-29 08:47:32.856 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.461 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.559 227364 INFO nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating config drive at /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.564 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnrd5spe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.702 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnrd5spe" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.738 227364 DEBUG nova.storage.rbd_utils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.743 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:34.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.915 227364 DEBUG oslo_concurrency.processutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.916 227364 INFO nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deleting local config drive /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:47:34 np0005539551 kernel: tapbef873ea-4c: entered promiscuous mode
Nov 29 03:47:34 np0005539551 NetworkManager[48922]: <info>  [1764406054.9681] manager: (tapbef873ea-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.967 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:34Z|00915|binding|INFO|Claiming lport bef873ea-4c5b-4b48-8022-9c3171bdab37 for this chassis.
Nov 29 03:47:34 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:34Z|00916|binding|INFO|bef873ea-4c5b-4b48-8022-9c3171bdab37: Claiming fa:16:3e:cd:1a:28 10.100.0.6
Nov 29 03:47:34 np0005539551 nova_compute[227360]: 2025-11-29 08:47:34.971 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:34 np0005539551 systemd-udevd[303507]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.003 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:1a:28 10.100.0.6'], port_security=['fa:16:3e:cd:1a:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b7b3384-3de2-4098-b77f-3658f82eedfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa85fffb-3c65-4631-aec4-f04bc3fcc9b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bef873ea-4c5b-4b48-8022-9c3171bdab37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.003 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bef873ea-4c5b-4b48-8022-9c3171bdab37 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f bound to our chassis#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.005 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e328485-18b8-4dc7-b012-0dd256b9b97f#033[00m
Nov 29 03:47:35 np0005539551 NetworkManager[48922]: <info>  [1764406055.0125] device (tapbef873ea-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:47:35 np0005539551 NetworkManager[48922]: <info>  [1764406055.0132] device (tapbef873ea-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.022 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[359def67-05b3-4acd-b15f-548742277ea1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.023 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e328485-11 in ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:47:35 np0005539551 systemd-machined[190756]: New machine qemu-94-instance-000000d7.
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.025 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e328485-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.025 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b40636ac-f3a2-4227-bf7d-6c8a6aafbf16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.026 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[59ae1e64-405b-4973-9606-c859d7fea013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539551 systemd[1]: Started Virtual Machine qemu-94-instance-000000d7.
Nov 29 03:47:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:35Z|00917|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 ovn-installed in OVS
Nov 29 03:47:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:35Z|00918|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 up in Southbound
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.083 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.040 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[05e91944-9a9e-407c-892d-5afdf41c13b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.107 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0a86a3-3933-4368-80ee-6a6aded822a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.142 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[df82a6d8-50bb-41a7-b44e-0834feb9bec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 systemd-udevd[303512]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:35 np0005539551 NetworkManager[48922]: <info>  [1764406055.1509] manager: (tap7e328485-10): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.150 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3ecfae-f5d7-43ce-959c-7825e8226bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.187 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[db00d58e-eecb-4473-ba2d-b6a93670aabe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.191 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5ff5ca-5530-4dcf-9f1a-883aec3a8b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 NetworkManager[48922]: <info>  [1764406055.2165] device (tap7e328485-10): carrier: link connected
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.223 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3687fe-cce9-4268-b2fe-f27e9c950f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.240 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2831187d-9a55-4d12-ac80-1640e0732cff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910209, 'reachable_time': 39794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303543, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.256 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[08eae01a-9c88-4f6e-9d1f-fd49390dcfa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:c011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910209, 'tstamp': 910209}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303544, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.277 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[49c6fa98-64a5-4f5e-8e0e-42c975804332]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910209, 'reachable_time': 39794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303545, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.313 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ae28177e-6a56-436d-94e8-cbee9a3d4453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.388 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf1a839-cd39-436a-a82e-528fb80c76ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.390 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.390 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.390 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e328485-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539551 kernel: tap7e328485-10: entered promiscuous mode
Nov 29 03:47:35 np0005539551 NetworkManager[48922]: <info>  [1764406055.3942] manager: (tap7e328485-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.395 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e328485-10, col_values=(('external_ids', {'iface-id': '239aff46-c81c-4ca8-9f81-35eea8cc0198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:35Z|00919|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.410 227364 DEBUG nova.network.neutron [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated VIF entry in instance network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.412 227364 DEBUG nova.network.neutron [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.412 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.413 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[37ce2d66-ac39-40df-98d5-ddf852f620da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.413 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:35.414 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'env', 'PROCESS_TAG=haproxy-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e328485-18b8-4dc7-b012-0dd256b9b97f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.435 227364 DEBUG oslo_concurrency.lockutils [req-9d702ffe-f200-4be6-8e2c-fe88fcc982ad req-fdf1c70f-1872-4b4c-a606-a16cf6b710de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.480 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406055.4797473, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.481 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.509 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.516 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406055.481152, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.516 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.546 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.551 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.560 227364 DEBUG nova.compute.manager [req-272bb07b-1792-4c72-9641-9e393acbb891 req-5002d8cc-ca19-43b7-8806-c14faca32944 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.561 227364 DEBUG oslo_concurrency.lockutils [req-272bb07b-1792-4c72-9641-9e393acbb891 req-5002d8cc-ca19-43b7-8806-c14faca32944 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.561 227364 DEBUG oslo_concurrency.lockutils [req-272bb07b-1792-4c72-9641-9e393acbb891 req-5002d8cc-ca19-43b7-8806-c14faca32944 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.562 227364 DEBUG oslo_concurrency.lockutils [req-272bb07b-1792-4c72-9641-9e393acbb891 req-5002d8cc-ca19-43b7-8806-c14faca32944 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.562 227364 DEBUG nova.compute.manager [req-272bb07b-1792-4c72-9641-9e393acbb891 req-5002d8cc-ca19-43b7-8806-c14faca32944 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Processing event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.562 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.567 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.571 227364 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance spawned successfully.#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.572 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.576 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.577 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406055.5667722, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.578 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.601 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.608 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.609 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.610 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.611 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.611 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.612 227364 DEBUG nova.virt.libvirt.driver [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.618 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.659 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.697 227364 INFO nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Took 6.37 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.698 227364 DEBUG nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.769 227364 INFO nova.compute.manager [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Took 7.47 seconds to build instance.#033[00m
Nov 29 03:47:35 np0005539551 nova_compute[227360]: 2025-11-29 08:47:35.787 227364 DEBUG oslo_concurrency.lockutils [None req-c8f0de80-02fa-4a00-a11a-de65eb881a43 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:35 np0005539551 podman[303617]: 2025-11-29 08:47:35.82110695 +0000 UTC m=+0.084677404 container create 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:47:35 np0005539551 podman[303617]: 2025-11-29 08:47:35.763872749 +0000 UTC m=+0.027443223 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:47:35 np0005539551 systemd[1]: Started libpod-conmon-6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e.scope.
Nov 29 03:47:35 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:47:35 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd9e5d4d8bae45461da165f8c1a50b4a8e74ea60e3606d3911dfa4eb812417b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:35 np0005539551 podman[303617]: 2025-11-29 08:47:35.925162507 +0000 UTC m=+0.188733001 container init 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:47:35 np0005539551 podman[303617]: 2025-11-29 08:47:35.9308236 +0000 UTC m=+0.194394044 container start 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:35 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [NOTICE]   (303636) : New worker (303638) forked
Nov 29 03:47:35 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [NOTICE]   (303636) : Loading success.
Nov 29 03:47:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.124691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057124726, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 529, "num_deletes": 251, "total_data_size": 755354, "memory_usage": 766344, "flush_reason": "Manual Compaction"}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057129521, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 498197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69896, "largest_seqno": 70420, "table_properties": {"data_size": 495437, "index_size": 795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6778, "raw_average_key_size": 19, "raw_value_size": 489836, "raw_average_value_size": 1379, "num_data_blocks": 35, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406030, "oldest_key_time": 1764406030, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 4858 microseconds, and 2027 cpu microseconds.
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129554) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 498197 bytes OK
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129570) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.130809) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.130821) EVENT_LOG_v1 {"time_micros": 1764406057130817, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.130835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 752252, prev total WAL file size 752252, number of live WAL files 2.
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.131550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(486KB)], [141(11MB)]
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057131592, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 12676504, "oldest_snapshot_seqno": -1}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9962 keys, 10769937 bytes, temperature: kUnknown
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057187341, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 10769937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10708529, "index_size": 35375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263397, "raw_average_key_size": 26, "raw_value_size": 10536417, "raw_average_value_size": 1057, "num_data_blocks": 1331, "num_entries": 9962, "num_filter_entries": 9962, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.187648) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10769937 bytes
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.189360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.1 rd, 192.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(47.1) write-amplify(21.6) OK, records in: 10476, records dropped: 514 output_compression: NoCompression
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.189379) EVENT_LOG_v1 {"time_micros": 1764406057189370, "job": 90, "event": "compaction_finished", "compaction_time_micros": 55831, "compaction_time_cpu_micros": 26606, "output_level": 6, "num_output_files": 1, "total_output_size": 10769937, "num_input_records": 10476, "num_output_records": 9962, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057189569, "job": 90, "event": "table_file_deletion", "file_number": 143}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057191953, "job": 90, "event": "table_file_deletion", "file_number": 141}
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.131498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.191992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.191997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.191999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.192000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:47:37.192002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.640 227364 DEBUG nova.compute.manager [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.640 227364 DEBUG oslo_concurrency.lockutils [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.640 227364 DEBUG oslo_concurrency.lockutils [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.641 227364 DEBUG oslo_concurrency.lockutils [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.641 227364 DEBUG nova.compute.manager [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.641 227364 WARNING nova.compute.manager [req-bbb09520-5b59-4249-b16f-04c85d6c36b8 req-2d6a0cd6-8564-4c84-9d99-419fc984a7db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received unexpected event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:47:37 np0005539551 nova_compute[227360]: 2025-11-29 08:47:37.765 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 03:47:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:47:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034973517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:47:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:47:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034973517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:47:39 np0005539551 nova_compute[227360]: 2025-11-29 08:47:39.477 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:40 np0005539551 nova_compute[227360]: 2025-11-29 08:47:40.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:41 np0005539551 NetworkManager[48922]: <info>  [1764406061.4010] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 03:47:41 np0005539551 NetworkManager[48922]: <info>  [1764406061.4023] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.661 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:41 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:41Z|00920|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.674 227364 DEBUG nova.compute.manager [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.675 227364 DEBUG nova.compute.manager [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing instance network info cache due to event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.675 227364 DEBUG oslo_concurrency.lockutils [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.675 227364 DEBUG oslo_concurrency.lockutils [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.675 227364 DEBUG nova.network.neutron [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:41 np0005539551 nova_compute[227360]: 2025-11-29 08:47:41.679 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:41 np0005539551 podman[303780]: 2025-11-29 08:47:41.681347472 +0000 UTC m=+0.079827423 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:47:41 np0005539551 podman[303779]: 2025-11-29 08:47:41.694483628 +0000 UTC m=+0.088381495 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:47:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:41 np0005539551 podman[303778]: 2025-11-29 08:47:41.743151655 +0000 UTC m=+0.147632809 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:47:42 np0005539551 nova_compute[227360]: 2025-11-29 08:47:42.768 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:43 np0005539551 nova_compute[227360]: 2025-11-29 08:47:43.726 227364 DEBUG nova.network.neutron [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated VIF entry in instance network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:43 np0005539551 nova_compute[227360]: 2025-11-29 08:47:43.727 227364 DEBUG nova.network.neutron [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:43 np0005539551 nova_compute[227360]: 2025-11-29 08:47:43.755 227364 DEBUG oslo_concurrency.lockutils [req-275d13cc-9f5f-4283-9f4f-64e7cbd1176d req-69399dad-5c5d-4829-a05c-37380bdf4d22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:44 np0005539551 nova_compute[227360]: 2025-11-29 08:47:44.481 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:44.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 03:47:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:46.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:46.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:47 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:47 np0005539551 nova_compute[227360]: 2025-11-29 08:47:47.770 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:48.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:49 np0005539551 nova_compute[227360]: 2025-11-29 08:47:49.480 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:50Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:1a:28 10.100.0.6
Nov 29 03:47:50 np0005539551 ovn_controller[130266]: 2025-11-29T08:47:50Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:1a:28 10.100.0.6
Nov 29 03:47:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:50.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:52 np0005539551 nova_compute[227360]: 2025-11-29 08:47:52.773 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:52.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:52.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:53 np0005539551 nova_compute[227360]: 2025-11-29 08:47:53.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:53 np0005539551 nova_compute[227360]: 2025-11-29 08:47:53.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.339 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.341 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.341 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.342 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.342 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.531 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1444893939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.823 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:54.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.919 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:54 np0005539551 nova_compute[227360]: 2025-11-29 08:47:54.920 227364 DEBUG nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.156 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.158 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4010MB free_disk=20.85297393798828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.158 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.159 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.242 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 6b7b3384-3de2-4098-b77f-3658f82eedfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.242 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.243 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.289 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3142937061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.762 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.770 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.810 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.843 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:47:55 np0005539551 nova_compute[227360]: 2025-11-29 08:47:55.844 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:56.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:57 np0005539551 nova_compute[227360]: 2025-11-29 08:47:57.777 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:57 np0005539551 nova_compute[227360]: 2025-11-29 08:47:57.844 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:57 np0005539551 nova_compute[227360]: 2025-11-29 08:47:57.844 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:47:57 np0005539551 nova_compute[227360]: 2025-11-29 08:47:57.844 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.410 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.411 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.411 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.411 227364 DEBUG nova.objects.instance [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.435 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.436 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.436 227364 INFO nova.compute.manager [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Shelving#033[00m
Nov 29 03:47:58 np0005539551 nova_compute[227360]: 2025-11-29 08:47:58.472 227364 DEBUG nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:47:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:47:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:59 np0005539551 nova_compute[227360]: 2025-11-29 08:47:59.525 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:59.525 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:59 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:47:59.528 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:47:59 np0005539551 nova_compute[227360]: 2025-11-29 08:47:59.533 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.332 227364 DEBUG nova.network.neutron [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.353 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.353 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.354 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:00 np0005539551 kernel: tapbef873ea-4c (unregistering): left promiscuous mode
Nov 29 03:48:00 np0005539551 NetworkManager[48922]: <info>  [1764406080.7053] device (tapbef873ea-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:48:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:48:00Z|00921|binding|INFO|Releasing lport bef873ea-4c5b-4b48-8022-9c3171bdab37 from this chassis (sb_readonly=0)
Nov 29 03:48:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:48:00Z|00922|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 down in Southbound
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.714 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539551 ovn_controller[130266]: 2025-11-29T08:48:00Z|00923|binding|INFO|Removing iface tapbef873ea-4c ovn-installed in OVS
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.716 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.720 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:1a:28 10.100.0.6'], port_security=['fa:16:3e:cd:1a:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b7b3384-3de2-4098-b77f-3658f82eedfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa85fffb-3c65-4631-aec4-f04bc3fcc9b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=bef873ea-4c5b-4b48-8022-9c3171bdab37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.721 139482 INFO neutron.agent.ovn.metadata.agent [-] Port bef873ea-4c5b-4b48-8022-9c3171bdab37 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f unbound from our chassis#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.723 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e328485-18b8-4dc7-b012-0dd256b9b97f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.724 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[02b55da0-0e99-4da7-be7d-931e72036f85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.725 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace which is not needed anymore#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.732 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539551 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Nov 29 03:48:00 np0005539551 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d7.scope: Consumed 14.698s CPU time.
Nov 29 03:48:00 np0005539551 systemd-machined[190756]: Machine qemu-94-instance-000000d7 terminated.
Nov 29 03:48:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:00 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [NOTICE]   (303636) : haproxy version is 2.8.14-c23fe91
Nov 29 03:48:00 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [NOTICE]   (303636) : path to executable is /usr/sbin/haproxy
Nov 29 03:48:00 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [WARNING]  (303636) : Exiting Master process...
Nov 29 03:48:00 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [ALERT]    (303636) : Current worker (303638) exited with code 143 (Terminated)
Nov 29 03:48:00 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[303632]: [WARNING]  (303636) : All workers exited. Exiting... (0)
Nov 29 03:48:00 np0005539551 systemd[1]: libpod-6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e.scope: Deactivated successfully.
Nov 29 03:48:00 np0005539551 podman[303964]: 2025-11-29 08:48:00.855852525 +0000 UTC m=+0.044370181 container died 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:48:00 np0005539551 systemd[1]: var-lib-containers-storage-overlay-ebd9e5d4d8bae45461da165f8c1a50b4a8e74ea60e3606d3911dfa4eb812417b-merged.mount: Deactivated successfully.
Nov 29 03:48:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:00 np0005539551 podman[303964]: 2025-11-29 08:48:00.907877164 +0000 UTC m=+0.096394800 container cleanup 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:48:00 np0005539551 systemd[1]: libpod-conmon-6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e.scope: Deactivated successfully.
Nov 29 03:48:00 np0005539551 podman[303993]: 2025-11-29 08:48:00.971121997 +0000 UTC m=+0.042735599 container remove 6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.976 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[44ef0e2e-99af-429b-b5b7-64ef01f0c758]: (4, ('Sat Nov 29 08:48:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e)\n6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e\nSat Nov 29 08:48:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e)\n6f9196a865dd089c9b2fd974b1f8dddf35c3dbf68f3662a02857fcc9871f192e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.978 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2870ab-a0b1-4af8-b500-a7b45bfb409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:00.980 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.982 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539551 kernel: tap7e328485-10: left promiscuous mode
Nov 29 03:48:00 np0005539551 nova_compute[227360]: 2025-11-29 08:48:00.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.000 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4efdac-30dd-4a4f-b90e-ce0a14eec6de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.015 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1e6bc7-2262-4164-ab45-c36dd43df836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.017 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[48e0751d-29c4-4b49-a16f-9576f0330139]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.041 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0eff80-f3a8-4db9-8dce-80a810ec8921]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910201, 'reachable_time': 25034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304023, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:01 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7e328485\x2d18b8\x2d4dc7\x2db012\x2d0dd256b9b97f.mount: Deactivated successfully.
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.045 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:48:01 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:01.045 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[dccaf3e6-a5ab-4767-bf1d-bade8b5753a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:01 np0005539551 nova_compute[227360]: 2025-11-29 08:48:01.489 227364 INFO nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:48:01 np0005539551 nova_compute[227360]: 2025-11-29 08:48:01.494 227364 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance destroyed successfully.#033[00m
Nov 29 03:48:01 np0005539551 nova_compute[227360]: 2025-11-29 08:48:01.494 227364 DEBUG nova.objects.instance [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:01 np0005539551 nova_compute[227360]: 2025-11-29 08:48:01.789 227364 INFO nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Beginning cold snapshot process#033[00m
Nov 29 03:48:01 np0005539551 nova_compute[227360]: 2025-11-29 08:48:01.921 227364 DEBUG nova.virt.libvirt.imagebackend [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.192 227364 DEBUG nova.storage.rbd_utils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] creating snapshot(c7db056ada744e5aaddabb501ca07cb7) on rbd image(6b7b3384-3de2-4098-b77f-3658f82eedfc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.497 227364 DEBUG nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.498 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.498 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.499 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.500 227364 DEBUG nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.500 227364 WARNING nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received unexpected event network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.501 227364 DEBUG nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.501 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.501 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.501 227364 DEBUG oslo_concurrency.lockutils [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.502 227364 DEBUG nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.502 227364 WARNING nova.compute.manager [req-bfafe2e4-f55f-4674-a643-297f648a52d4 req-20adc5ff-bc57-4b60-9515-d2ba7b98b4ce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received unexpected event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:48:02 np0005539551 nova_compute[227360]: 2025-11-29 08:48:02.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:02.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:02.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539551 nova_compute[227360]: 2025-11-29 08:48:03.260 227364 DEBUG nova.storage.rbd_utils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] cloning vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk@c7db056ada744e5aaddabb501ca07cb7 to images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:48:03 np0005539551 nova_compute[227360]: 2025-11-29 08:48:03.459 227364 DEBUG nova.storage.rbd_utils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] flattening images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:48:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:03.530 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:04 np0005539551 nova_compute[227360]: 2025-11-29 08:48:04.016 227364 DEBUG nova.storage.rbd_utils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] removing snapshot(c7db056ada744e5aaddabb501ca07cb7) on rbd image(6b7b3384-3de2-4098-b77f-3658f82eedfc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:48:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539551 nova_compute[227360]: 2025-11-29 08:48:04.285 227364 DEBUG nova.storage.rbd_utils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] creating snapshot(snap) on rbd image(e21e97a9-475e-4ed4-bd3f-ad8000b59f07) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:48:04 np0005539551 nova_compute[227360]: 2025-11-29 08:48:04.535 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 03:48:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:06.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:06.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.036 227364 INFO nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Snapshot image upload complete#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.037 227364 DEBUG nova.compute.manager [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.477 227364 INFO nova.compute.manager [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Shelve offloading#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.484 227364 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance destroyed successfully.#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.484 227364 DEBUG nova.compute.manager [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.487 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.487 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.488 227364 DEBUG nova.network.neutron [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:07 np0005539551 nova_compute[227360]: 2025-11-29 08:48:07.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:08 np0005539551 nova_compute[227360]: 2025-11-29 08:48:08.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:08 np0005539551 nova_compute[227360]: 2025-11-29 08:48:08.715 227364 DEBUG nova.network.neutron [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:08 np0005539551 nova_compute[227360]: 2025-11-29 08:48:08.763 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:08.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:08.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.536 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.607 227364 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance destroyed successfully.#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.608 227364 DEBUG nova.objects.instance [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'resources' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.641 227364 DEBUG nova.virt.libvirt.vif [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6ZJ0e8HULKzf15gAhzf0Pozq+BNpsY6JGJkWj4En/gstCBUIDEBBlhlRRy2j+ObTo/Olxsq8yNRaWE1A2BtIbVFq5FCJEzTcF45GwsvPUIsE1i0kjUMi8fiaEVQJMjJA==',key_name='tempest-TestShelveInstance-892403284',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member',shelved_at='2025-11-29T08:48:07.036984',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='e21e97a9-475e-4ed4-bd3f-ad8000b59f07'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:01Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.642 227364 DEBUG nova.network.os_vif_util [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.642 227364 DEBUG nova.network.os_vif_util [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.643 227364 DEBUG os_vif [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.644 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.644 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbef873ea-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.697 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.698 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.700 227364 INFO os_vif [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c')#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.715 227364 DEBUG nova.compute.manager [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.715 227364 DEBUG nova.compute.manager [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing instance network info cache due to event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.716 227364 DEBUG oslo_concurrency.lockutils [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.716 227364 DEBUG oslo_concurrency.lockutils [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:09 np0005539551 nova_compute[227360]: 2025-11-29 08:48:09.716 227364 DEBUG nova.network.neutron [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:10 np0005539551 nova_compute[227360]: 2025-11-29 08:48:10.401 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 03:48:10 np0005539551 nova_compute[227360]: 2025-11-29 08:48:10.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539551 nova_compute[227360]: 2025-11-29 08:48:10.796 227364 INFO nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deleting instance files /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc_del#033[00m
Nov 29 03:48:10 np0005539551 nova_compute[227360]: 2025-11-29 08:48:10.797 227364 INFO nova.virt.libvirt.driver [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deletion of /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc_del complete#033[00m
Nov 29 03:48:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:10.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:10 np0005539551 nova_compute[227360]: 2025-11-29 08:48:10.949 227364 INFO nova.scheduler.client.report [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Deleted allocations for instance 6b7b3384-3de2-4098-b77f-3658f82eedfc#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.001 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.002 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.024 227364 DEBUG oslo_concurrency.processutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1595076906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.493 227364 DEBUG nova.network.neutron [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated VIF entry in instance network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.495 227364 DEBUG nova.network.neutron [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": null, "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapbef873ea-4c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.513 227364 DEBUG oslo_concurrency.processutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.517 227364 DEBUG nova.compute.provider_tree [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.526 227364 DEBUG oslo_concurrency.lockutils [req-f77ff184-4794-4ad1-b240-4be3dfdef738 req-3e90762d-49e9-496a-a0e7-90897951fed3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.539 227364 DEBUG nova.scheduler.client.report [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.572 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:11 np0005539551 nova_compute[227360]: 2025-11-29 08:48:11.626 227364 DEBUG oslo_concurrency.lockutils [None req-8808af56-106a-4c83-b3d4-cf2b7977aa0a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:12 np0005539551 podman[304210]: 2025-11-29 08:48:12.602447745 +0000 UTC m=+0.054672782 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:48:12 np0005539551 podman[304209]: 2025-11-29 08:48:12.612207488 +0000 UTC m=+0.068491044 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:48:12 np0005539551 podman[304208]: 2025-11-29 08:48:12.633516686 +0000 UTC m=+0.091957551 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:48:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:12.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:14 np0005539551 nova_compute[227360]: 2025-11-29 08:48:14.538 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:14 np0005539551 nova_compute[227360]: 2025-11-29 08:48:14.741 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:14.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:15 np0005539551 nova_compute[227360]: 2025-11-29 08:48:15.965 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406080.964223, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:15 np0005539551 nova_compute[227360]: 2025-11-29 08:48:15.966 227364 INFO nova.compute.manager [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:48:15 np0005539551 nova_compute[227360]: 2025-11-29 08:48:15.987 227364 DEBUG nova.compute.manager [None req-e9552a13-30a5-424f-8f91-0bcc33f3db82 - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:16.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:16.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:18.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:19 np0005539551 nova_compute[227360]: 2025-11-29 08:48:19.542 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:19 np0005539551 nova_compute[227360]: 2025-11-29 08:48:19.743 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:19.896 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:19.897 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:19.897 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:20.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:22.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 03:48:24 np0005539551 nova_compute[227360]: 2025-11-29 08:48:24.571 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:24 np0005539551 nova_compute[227360]: 2025-11-29 08:48:24.745 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:24.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:28.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:29 np0005539551 nova_compute[227360]: 2025-11-29 08:48:29.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:29 np0005539551 nova_compute[227360]: 2025-11-29 08:48:29.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 03:48:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:32.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:34 np0005539551 nova_compute[227360]: 2025-11-29 08:48:34.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:34 np0005539551 nova_compute[227360]: 2025-11-29 08:48:34.748 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:34.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:38.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:39 np0005539551 nova_compute[227360]: 2025-11-29 08:48:39.689 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:39 np0005539551 nova_compute[227360]: 2025-11-29 08:48:39.749 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:40.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:40.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:42.146 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:42 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:42.147 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:42 np0005539551 nova_compute[227360]: 2025-11-29 08:48:42.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:42.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:43 np0005539551 podman[304271]: 2025-11-29 08:48:43.619537961 +0000 UTC m=+0.064844987 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:43 np0005539551 podman[304270]: 2025-11-29 08:48:43.636334855 +0000 UTC m=+0.087071138 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:43 np0005539551 podman[304272]: 2025-11-29 08:48:43.647106267 +0000 UTC m=+0.081224040 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:44 np0005539551 nova_compute[227360]: 2025-11-29 08:48:44.691 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:44 np0005539551 nova_compute[227360]: 2025-11-29 08:48:44.750 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:44 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 03:48:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:44.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:46.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:46.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:47 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:48:47.149 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:48.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:49 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:49 np0005539551 nova_compute[227360]: 2025-11-29 08:48:49.692 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:49 np0005539551 nova_compute[227360]: 2025-11-29 08:48:49.751 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:50.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:52.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.436 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.437 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.437 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.438 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.439 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1594139705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:53 np0005539551 nova_compute[227360]: 2025-11-29 08:48:53.855 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.003 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.005 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4213MB free_disk=20.970890045166016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.005 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.005 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.093 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.093 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.122 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3559129672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.561 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.567 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.587 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.613 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.613 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.693 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:54 np0005539551 nova_compute[227360]: 2025-11-29 08:48:54.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:54.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:55 np0005539551 nova_compute[227360]: 2025-11-29 08:48:55.614 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:56 np0005539551 nova_compute[227360]: 2025-11-29 08:48:56.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:56 np0005539551 nova_compute[227360]: 2025-11-29 08:48:56.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:48:56 np0005539551 nova_compute[227360]: 2025-11-29 08:48:56.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:48:56 np0005539551 nova_compute[227360]: 2025-11-29 08:48:56.441 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:48:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508650177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:56.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:56.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:48:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:58.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.695 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.754 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.809 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.809 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.823 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.891 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.891 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.897 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.898 227364 INFO nova.compute.claims [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:48:59 np0005539551 nova_compute[227360]: 2025-11-29 08:48:59.986 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1029578492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.458 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.463 227364 DEBUG nova.compute.provider_tree [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.484 227364 DEBUG nova.scheduler.client.report [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.502 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.503 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.549 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.549 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.567 227364 INFO nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.584 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.626 227364 INFO nova.virt.block_device [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Booting with volume 77422021-a8a1-4df0-9b4a-24ac0dd34ac3 at /dev/vda#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.743 227364 DEBUG nova.policy [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d446574294425e9bc89e596ea56dc9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.775 227364 DEBUG os_brick.utils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.777 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.790 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.790 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[03e7c001-db83-43fc-9a7f-78ae738a7940]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.791 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.799 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.799 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e3466e-8be2-4bb9-b11c-fef81873ee75]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.801 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.810 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.811 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b261a0-4ce4-484d-86b8-97b38464cad2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.812 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[b950fb7c-6c70-45bf-96f4-d0843dd7c231]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.813 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.846 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.848 227364 DEBUG os_brick.initiator.connectors.lightos [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.848 227364 DEBUG os_brick.initiator.connectors.lightos [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.848 227364 DEBUG os_brick.initiator.connectors.lightos [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.849 227364 DEBUG os_brick.utils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:49:00 np0005539551 nova_compute[227360]: 2025-11-29 08:49:00.849 227364 DEBUG nova.virt.block_device [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating existing volume attachment record: d10519eb-a981-4742-a2f0-dedecae74be4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:49:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:00.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.803 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.805 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.806 227364 INFO nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating image(s)#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.806 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.806 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Ensure instance console log exists: /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.807 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.807 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:01 np0005539551 nova_compute[227360]: 2025-11-29 08:49:01.808 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:02 np0005539551 nova_compute[227360]: 2025-11-29 08:49:02.849 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Successfully created port: ce1430a7-5573-42ed-950f-6b729838e557 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:49:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:02.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:02.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.697 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.756 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.913 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Successfully updated port: ce1430a7-5573-42ed-950f-6b729838e557 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.928 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.928 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.929 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:04.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.992 227364 DEBUG nova.compute.manager [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-changed-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.992 227364 DEBUG nova.compute.manager [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing instance network info cache due to event network-changed-ce1430a7-5573-42ed-950f-6b729838e557. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:04 np0005539551 nova_compute[227360]: 2025-11-29 08:49:04.993 227364 DEBUG oslo_concurrency.lockutils [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:04.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:05 np0005539551 nova_compute[227360]: 2025-11-29 08:49:05.415 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:49:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.917 227364 DEBUG nova.network.neutron [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:06.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.948 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.949 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance network_info: |[{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.949 227364 DEBUG oslo_concurrency.lockutils [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.949 227364 DEBUG nova.network.neutron [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing network info cache for port ce1430a7-5573-42ed-950f-6b729838e557 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.953 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start _get_guest_xml network_info=[{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'attached_at': '', 'detached_at': '', 'volume_id': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'serial': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'd10519eb-a981-4742-a2f0-dedecae74be4', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.958 227364 WARNING nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.962 227364 DEBUG nova.virt.libvirt.host [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.962 227364 DEBUG nova.virt.libvirt.host [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.967 227364 DEBUG nova.virt.libvirt.host [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.967 227364 DEBUG nova.virt.libvirt.host [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.968 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.969 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.969 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.969 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.969 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.970 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.970 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.970 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.970 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.971 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.971 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:49:06 np0005539551 nova_compute[227360]: 2025-11-29 08:49:06.971 227364 DEBUG nova.virt.hardware [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:49:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:06.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.002 227364 DEBUG nova.storage.rbd_utils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.006 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3833863017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.434 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.459 227364 DEBUG nova.virt.libvirt.vif [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD/+OETyA9Gkljsz2W5PflUlfWT5bS3tM7MSqY4jYxRfqsQn4zIu3YXrN04BT+xMXAEVtwQTmrwJ33WHrUnatG2eD4TuM6i7mJQ/IeECsfPQo6L0sbJAJwESalXFW3BvQ==',key_name='tempest-TestShelveInstance-691552797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:00Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.460 227364 DEBUG nova.network.os_vif_util [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.460 227364 DEBUG nova.network.os_vif_util [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.461 227364 DEBUG nova.objects.instance [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_devices' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.472 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <uuid>e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</uuid>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <name>instance-000000dc</name>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestShelveInstance-server-965324894</nova:name>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:49:06</nova:creationTime>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:user uuid="14d446574294425e9bc89e596ea56dc9">tempest-TestShelveInstance-498716578-project-member</nova:user>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:project uuid="2e636ab14fe94059b82b9cbcf8831d87">tempest-TestShelveInstance-498716578</nova:project>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <nova:port uuid="ce1430a7-5573-42ed-950f-6b729838e557">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="serial">e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="uuid">e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-77422021-a8a1-4df0-9b4a-24ac0dd34ac3">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <serial>77422021-a8a1-4df0-9b4a-24ac0dd34ac3</serial>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:0a:48:c9"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <target dev="tapce1430a7-55"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/console.log" append="off"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:49:07 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:49:07 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:49:07 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:49:07 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.474 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Preparing to wait for external event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.474 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.474 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.474 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.475 227364 DEBUG nova.virt.libvirt.vif [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD/+OETyA9Gkljsz2W5PflUlfWT5bS3tM7MSqY4jYxRfqsQn4zIu3YXrN04BT+xMXAEVtwQTmrwJ33WHrUnatG2eD4TuM6i7mJQ/IeECsfPQo6L0sbJAJwESalXFW3BvQ==',key_name='tempest-TestShelveInstance-691552797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:00Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.475 227364 DEBUG nova.network.os_vif_util [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.476 227364 DEBUG nova.network.os_vif_util [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.476 227364 DEBUG os_vif [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.477 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.477 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.478 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.481 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.481 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1430a7-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.482 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce1430a7-55, col_values=(('external_ids', {'iface-id': 'ce1430a7-5573-42ed-950f-6b729838e557', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:48:c9', 'vm-uuid': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539551 NetworkManager[48922]: <info>  [1764406147.4843] manager: (tapce1430a7-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.489 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.490 227364 INFO os_vif [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55')#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.527 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.528 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.528 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No VIF found with MAC fa:16:3e:0a:48:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.528 227364 INFO nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Using config drive#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.553 227364 DEBUG nova.storage.rbd_utils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.939 227364 INFO nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating config drive at /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config#033[00m
Nov 29 03:49:07 np0005539551 nova_compute[227360]: 2025-11-29 08:49:07.943 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqu6i2oa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.083 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgqu6i2oa" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.108 227364 DEBUG nova.storage.rbd_utils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.111 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.262 227364 DEBUG oslo_concurrency.processutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.263 227364 INFO nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deleting local config drive /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config because it was imported into RBD.#033[00m
Nov 29 03:49:08 np0005539551 kernel: tapce1430a7-55: entered promiscuous mode
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.3148] manager: (tapce1430a7-55): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.314 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:08Z|00924|binding|INFO|Claiming lport ce1430a7-5573-42ed-950f-6b729838e557 for this chassis.
Nov 29 03:49:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:08Z|00925|binding|INFO|ce1430a7-5573-42ed-950f-6b729838e557: Claiming fa:16:3e:0a:48:c9 10.100.0.8
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.319 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.321 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.327 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:48:c9 10.100.0.8'], port_security=['fa:16:3e:0a:48:c9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96207713-2918-4e0d-9816-6691655bac56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=ce1430a7-5573-42ed-950f-6b729838e557) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.328 139482 INFO neutron.agent.ovn.metadata.agent [-] Port ce1430a7-5573-42ed-950f-6b729838e557 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f bound to our chassis#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.330 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e328485-18b8-4dc7-b012-0dd256b9b97f#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.341 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f8ae86-ff66-41de-a243-5c90a1f9e43b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.342 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e328485-11 in ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:49:08 np0005539551 systemd-udevd[304825]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.343 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e328485-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.343 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[b71b4e59-5a9e-41e8-a811-01e3aa9c2b76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.344 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf83459-1925-4b8d-9666-f3158ba21433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 systemd-machined[190756]: New machine qemu-95-instance-000000dc.
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.3548] device (tapce1430a7-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.3561] device (tapce1430a7-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.355 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[639013b3-5a35-4cc0-9364-9808c771d4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 systemd[1]: Started Virtual Machine qemu-95-instance-000000dc.
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.381 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ce5ff4-a264-472b-aca5-1de7830d509b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:08Z|00926|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 ovn-installed in OVS
Nov 29 03:49:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:08Z|00927|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 up in Southbound
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.389 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.415 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[11f5bc18-f0d0-4e4c-a1fb-147b330230e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.420 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[15d20e3f-571b-4cf1-9b6b-30cb385e0efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.4216] manager: (tap7e328485-10): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.455 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9adfae-ecaa-4bbe-a153-be585d00a244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.460 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[71b0cd63-249a-443a-b505-b10f3a5fe76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.4870] device (tap7e328485-10): carrier: link connected
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.495 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[43193eb3-1258-470c-bd2d-2fece31fa880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.516 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[1d67d7c2-e153-46dc-aaeb-3c91b706d2e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919536, 'reachable_time': 17194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304857, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.536 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cac4d308-ed4e-428c-bec1-392dd4fff862]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:c011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 919536, 'tstamp': 919536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304858, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.557 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[52c5656d-2d68-4447-b7b7-5779466214d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919536, 'reachable_time': 17194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304859, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.598 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[07d90210-777c-4bac-90e6-cf5fc5ce925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.660 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[974772e8-5177-47e6-87e7-458afe56d439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.661 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.662 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.662 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e328485-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:08 np0005539551 kernel: tap7e328485-10: entered promiscuous mode
Nov 29 03:49:08 np0005539551 NetworkManager[48922]: <info>  [1764406148.6654] manager: (tap7e328485-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.667 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e328485-10, col_values=(('external_ids', {'iface-id': '239aff46-c81c-4ca8-9f81-35eea8cc0198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:08 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:08Z|00928|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.668 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.685 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.686 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e89f7132-54a0-4c18-9ec5-20ea2eee4679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.687 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:49:08 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:08.688 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'env', 'PROCESS_TAG=haproxy-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e328485-18b8-4dc7-b012-0dd256b9b97f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.725 227364 DEBUG nova.compute.manager [req-f0b23469-2b12-431c-8311-489b81e0825d req-1c7fa2a5-61b4-4095-a8d5-da255146a358 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.726 227364 DEBUG oslo_concurrency.lockutils [req-f0b23469-2b12-431c-8311-489b81e0825d req-1c7fa2a5-61b4-4095-a8d5-da255146a358 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.726 227364 DEBUG oslo_concurrency.lockutils [req-f0b23469-2b12-431c-8311-489b81e0825d req-1c7fa2a5-61b4-4095-a8d5-da255146a358 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.726 227364 DEBUG oslo_concurrency.lockutils [req-f0b23469-2b12-431c-8311-489b81e0825d req-1c7fa2a5-61b4-4095-a8d5-da255146a358 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.726 227364 DEBUG nova.compute.manager [req-f0b23469-2b12-431c-8311-489b81e0825d req-1c7fa2a5-61b4-4095-a8d5-da255146a358 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Processing event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.860 227364 DEBUG nova.network.neutron [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated VIF entry in instance network info cache for port ce1430a7-5573-42ed-950f-6b729838e557. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.861 227364 DEBUG nova.network.neutron [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:08 np0005539551 nova_compute[227360]: 2025-11-29 08:49:08.875 227364 DEBUG oslo_concurrency.lockutils [req-5aad9203-8fa5-4ba3-a637-62e14cf58806 req-4d040601-9524-4a46-bcde-26f8aa0903ea 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:08.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:08.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:09 np0005539551 podman[304891]: 2025-11-29 08:49:09.053570713 +0000 UTC m=+0.046699806 container create 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:49:09 np0005539551 systemd[1]: Started libpod-conmon-30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf.scope.
Nov 29 03:49:09 np0005539551 podman[304891]: 2025-11-29 08:49:09.02986317 +0000 UTC m=+0.022992243 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:49:09 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:49:09 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad368bcda9b7ea62160a86d1db607124f0e05ca0227793736811a33c036af03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:09 np0005539551 podman[304891]: 2025-11-29 08:49:09.147210988 +0000 UTC m=+0.140340071 container init 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:49:09 np0005539551 podman[304891]: 2025-11-29 08:49:09.152754388 +0000 UTC m=+0.145883461 container start 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:49:09 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [NOTICE]   (304951) : New worker (304953) forked
Nov 29 03:49:09 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [NOTICE]   (304951) : Loading success.
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.225 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406149.2251592, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.226 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.228 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.231 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.235 227364 INFO nova.virt.libvirt.driver [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance spawned successfully.#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.235 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.250 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.261 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.269 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.270 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.270 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.270 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.271 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.271 227364 DEBUG nova.virt.libvirt.driver [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.312 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.312 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406149.2254066, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.313 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.356 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.359 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406149.2308614, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.359 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.371 227364 INFO nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Took 7.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.371 227364 DEBUG nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.397 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.400 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.450 227364 INFO nova.compute.manager [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Took 9.58 seconds to build instance.#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.467 227364 DEBUG oslo_concurrency.lockutils [None req-f30ce20c-f664-4ecb-91aa-9db0dedb018a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:09 np0005539551 nova_compute[227360]: 2025-11-29 08:49:09.699 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.695153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150695279, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1406, "num_deletes": 258, "total_data_size": 2987008, "memory_usage": 3029112, "flush_reason": "Manual Compaction"}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150710132, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1936669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70425, "largest_seqno": 71826, "table_properties": {"data_size": 1930463, "index_size": 3408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1917747, "raw_average_value_size": 2845, "num_data_blocks": 149, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406058, "oldest_key_time": 1764406058, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 15163 microseconds, and 5680 cpu microseconds.
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.710316) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1936669 bytes OK
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.710390) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.712147) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.712161) EVENT_LOG_v1 {"time_micros": 1764406150712157, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.712176) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2980280, prev total WAL file size 2980280, number of live WAL files 2.
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.713254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373636' seq:0, type:0; will stop at (end)
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1891KB)], [144(10MB)]
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150713307, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12706606, "oldest_snapshot_seqno": -1}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 10099 keys, 12541305 bytes, temperature: kUnknown
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150789018, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 12541305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12476843, "index_size": 38109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 267364, "raw_average_key_size": 26, "raw_value_size": 12300230, "raw_average_value_size": 1217, "num_data_blocks": 1444, "num_entries": 10099, "num_filter_entries": 10099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.789260) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12541305 bytes
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.790565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.7 rd, 165.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.0) write-amplify(6.5) OK, records in: 10636, records dropped: 537 output_compression: NoCompression
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.790582) EVENT_LOG_v1 {"time_micros": 1764406150790574, "job": 92, "event": "compaction_finished", "compaction_time_micros": 75768, "compaction_time_cpu_micros": 26452, "output_level": 6, "num_output_files": 1, "total_output_size": 12541305, "num_input_records": 10636, "num_output_records": 10099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150790950, "job": 92, "event": "table_file_deletion", "file_number": 146}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150792670, "job": 92, "event": "table_file_deletion", "file_number": 144}
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.713172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.792751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.792756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.792758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.792759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:49:10.792760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.865 227364 DEBUG nova.compute.manager [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.866 227364 DEBUG oslo_concurrency.lockutils [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.866 227364 DEBUG oslo_concurrency.lockutils [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.866 227364 DEBUG oslo_concurrency.lockutils [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:10 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.867 227364 DEBUG nova.compute.manager [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:10 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:49:10 np0005539551 nova_compute[227360]: 2025-11-29 08:49:10.867 227364 WARNING nova.compute.manager [req-2add75a0-0a1e-4140-9e45-7d89ae49989a req-622d99c5-edb5-43f8-96e6-074993af7ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received unexpected event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:49:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:10.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:11.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:12 np0005539551 nova_compute[227360]: 2025-11-29 08:49:12.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:12.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:13.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539551 NetworkManager[48922]: <info>  [1764406153.0786] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 03:49:13 np0005539551 NetworkManager[48922]: <info>  [1764406153.0808] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.366 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:13Z|00929|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.408 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.719 227364 DEBUG nova.compute.manager [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-changed-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.719 227364 DEBUG nova.compute.manager [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing instance network info cache due to event network-changed-ce1430a7-5573-42ed-950f-6b729838e557. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.720 227364 DEBUG oslo_concurrency.lockutils [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.720 227364 DEBUG oslo_concurrency.lockutils [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:13 np0005539551 nova_compute[227360]: 2025-11-29 08:49:13.720 227364 DEBUG nova.network.neutron [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing network info cache for port ce1430a7-5573-42ed-950f-6b729838e557 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:14 np0005539551 podman[304966]: 2025-11-29 08:49:14.601036755 +0000 UTC m=+0.054139857 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:49:14 np0005539551 podman[304967]: 2025-11-29 08:49:14.601487697 +0000 UTC m=+0.051088354 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:49:14 np0005539551 podman[304965]: 2025-11-29 08:49:14.647225796 +0000 UTC m=+0.090494372 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:49:14 np0005539551 nova_compute[227360]: 2025-11-29 08:49:14.702 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:14.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:15 np0005539551 nova_compute[227360]: 2025-11-29 08:49:15.003 227364 DEBUG nova.network.neutron [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated VIF entry in instance network info cache for port ce1430a7-5573-42ed-950f-6b729838e557. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:15 np0005539551 nova_compute[227360]: 2025-11-29 08:49:15.004 227364 DEBUG nova.network.neutron [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539551 nova_compute[227360]: 2025-11-29 08:49:15.021 227364 DEBUG oslo_concurrency.lockutils [req-e8721080-44f8-4258-b64f-e4a41aaf0545 req-a64e2514-9060-4bf6-9e07-545061493f3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:17 np0005539551 nova_compute[227360]: 2025-11-29 08:49:17.484 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:19.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:19 np0005539551 nova_compute[227360]: 2025-11-29 08:49:19.704 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:19.898 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:19.898 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:19.899 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:20.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:21.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:21.122 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:21 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:21.123 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:49:21 np0005539551 nova_compute[227360]: 2025-11-29 08:49:21.139 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:22 np0005539551 nova_compute[227360]: 2025-11-29 08:49:22.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:23Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:48:c9 10.100.0.8
Nov 29 03:49:23 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:23Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:48:c9 10.100.0.8
Nov 29 03:49:23 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:23.126 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:24 np0005539551 nova_compute[227360]: 2025-11-29 08:49:24.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:25.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:26.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:27.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539551 nova_compute[227360]: 2025-11-29 08:49:27.522 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:28.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:29.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:29 np0005539551 nova_compute[227360]: 2025-11-29 08:49:29.420 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:29 np0005539551 nova_compute[227360]: 2025-11-29 08:49:29.421 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:29 np0005539551 nova_compute[227360]: 2025-11-29 08:49:29.421 227364 INFO nova.compute.manager [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Shelve offloading#033[00m
Nov 29 03:49:29 np0005539551 nova_compute[227360]: 2025-11-29 08:49:29.446 227364 DEBUG nova.virt.libvirt.driver [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:49:29 np0005539551 nova_compute[227360]: 2025-11-29 08:49:29.708 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:31 np0005539551 kernel: tapce1430a7-55 (unregistering): left promiscuous mode
Nov 29 03:49:31 np0005539551 NetworkManager[48922]: <info>  [1764406171.6939] device (tapce1430a7-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:31Z|00930|binding|INFO|Releasing lport ce1430a7-5573-42ed-950f-6b729838e557 from this chassis (sb_readonly=0)
Nov 29 03:49:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:31Z|00931|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 down in Southbound
Nov 29 03:49:31 np0005539551 nova_compute[227360]: 2025-11-29 08:49:31.704 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:49:31Z|00932|binding|INFO|Removing iface tapce1430a7-55 ovn-installed in OVS
Nov 29 03:49:31 np0005539551 nova_compute[227360]: 2025-11-29 08:49:31.706 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.712 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:48:c9 10.100.0.8'], port_security=['fa:16:3e:0a:48:c9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96207713-2918-4e0d-9816-6691655bac56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=ce1430a7-5573-42ed-950f-6b729838e557) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.713 139482 INFO neutron.agent.ovn.metadata.agent [-] Port ce1430a7-5573-42ed-950f-6b729838e557 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f unbound from our chassis#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.715 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e328485-18b8-4dc7-b012-0dd256b9b97f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.717 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f2de352d-ed05-464f-84a8-d71da9b4b4ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.717 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace which is not needed anymore#033[00m
Nov 29 03:49:31 np0005539551 nova_compute[227360]: 2025-11-29 08:49:31.723 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:31 np0005539551 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Nov 29 03:49:31 np0005539551 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000dc.scope: Consumed 14.305s CPU time.
Nov 29 03:49:31 np0005539551 systemd-machined[190756]: Machine qemu-95-instance-000000dc terminated.
Nov 29 03:49:31 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [NOTICE]   (304951) : haproxy version is 2.8.14-c23fe91
Nov 29 03:49:31 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [NOTICE]   (304951) : path to executable is /usr/sbin/haproxy
Nov 29 03:49:31 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [WARNING]  (304951) : Exiting Master process...
Nov 29 03:49:31 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [ALERT]    (304951) : Current worker (304953) exited with code 143 (Terminated)
Nov 29 03:49:31 np0005539551 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[304939]: [WARNING]  (304951) : All workers exited. Exiting... (0)
Nov 29 03:49:31 np0005539551 systemd[1]: libpod-30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf.scope: Deactivated successfully.
Nov 29 03:49:31 np0005539551 podman[305049]: 2025-11-29 08:49:31.855221697 +0000 UTC m=+0.047491987 container died 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:49:31 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf-userdata-shm.mount: Deactivated successfully.
Nov 29 03:49:31 np0005539551 systemd[1]: var-lib-containers-storage-overlay-dad368bcda9b7ea62160a86d1db607124f0e05ca0227793736811a33c036af03-merged.mount: Deactivated successfully.
Nov 29 03:49:31 np0005539551 podman[305049]: 2025-11-29 08:49:31.902474047 +0000 UTC m=+0.094744327 container cleanup 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:49:31 np0005539551 systemd[1]: libpod-conmon-30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf.scope: Deactivated successfully.
Nov 29 03:49:31 np0005539551 podman[305080]: 2025-11-29 08:49:31.970137269 +0000 UTC m=+0.045268217 container remove 30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.977 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6f7470-3320-4ea3-b467-dbeb8c715634]: (4, ('Sat Nov 29 08:49:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf)\n30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf\nSat Nov 29 08:49:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf)\n30141b9fbefbe75631872b171f6c46a9127ad95257508ef71aaa95d2c0c0e6cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.980 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[22abd0ac-8d39-4d3e-9ab2-9e29bb7d9350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:31 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:31.981 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.028 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539551 kernel: tap7e328485-10: left promiscuous mode
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.041 227364 DEBUG nova.compute.manager [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.042 227364 DEBUG oslo_concurrency.lockutils [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.042 227364 DEBUG oslo_concurrency.lockutils [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.042 227364 DEBUG oslo_concurrency.lockutils [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.043 227364 DEBUG nova.compute.manager [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.043 227364 WARNING nova.compute.manager [req-e7a1342a-29c9-4e25-a695-929ac6e1cc05 req-997b3e9f-6a04-4ce6-884b-9167fd334292 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received unexpected event network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.044 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.047 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[150f2692-58b8-43c2-9c72-18b684ecf1e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.061 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[84c6bcbf-b022-49cc-b2b0-861ebc254fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.063 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[af1876e9-a3f3-4be4-b4bf-5f2d9091dd65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.083 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff9112-eb40-402c-9f18-d182bb117ccc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 919528, 'reachable_time': 34993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305108, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.085 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:49:32 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:49:32.085 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f849cb-f454-4a8f-bed7-ddbc351c4758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:32 np0005539551 systemd[1]: run-netns-ovnmeta\x2d7e328485\x2d18b8\x2d4dc7\x2db012\x2d0dd256b9b97f.mount: Deactivated successfully.
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.470 227364 INFO nova.virt.libvirt.driver [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.478 227364 INFO nova.virt.libvirt.driver [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance destroyed successfully.#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.479 227364 DEBUG nova.objects.instance [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'numa_topology' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.496 227364 DEBUG nova.compute.manager [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.500 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.501 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.501 227364 DEBUG nova.network.neutron [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.525 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.534 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539551 nova_compute[227360]: 2025-11-29 08:49:32.827 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:32.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.181 227364 DEBUG nova.compute.manager [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.182 227364 DEBUG oslo_concurrency.lockutils [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.182 227364 DEBUG oslo_concurrency.lockutils [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.183 227364 DEBUG oslo_concurrency.lockutils [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.183 227364 DEBUG nova.compute.manager [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.183 227364 WARNING nova.compute.manager [req-771abcbe-ce47-45bf-95fa-96560cf97ccf req-835705a0-4792-41db-b86e-5716ff0cb2c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received unexpected event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.572 227364 DEBUG nova.network.neutron [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.592 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:34 np0005539551 nova_compute[227360]: 2025-11-29 08:49:34.710 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:35.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.910 227364 INFO nova.virt.libvirt.driver [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance destroyed successfully.#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.911 227364 DEBUG nova.objects.instance [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'resources' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.927 227364 DEBUG nova.virt.libvirt.vif [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD/+OETyA9Gkljsz2W5PflUlfWT5bS3tM7MSqY4jYxRfqsQn4zIu3YXrN04BT+xMXAEVtwQTmrwJ33WHrUnatG2eD4TuM6i7mJQ/IeECsfPQo6L0sbJAJwESalXFW3BvQ==',key_name='tempest-TestShelveInstance-691552797',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:09Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": 
"192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.928 227364 DEBUG nova.network.os_vif_util [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.929 227364 DEBUG nova.network.os_vif_util [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.929 227364 DEBUG os_vif [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.930 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.931 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1430a7-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.932 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539551 nova_compute[227360]: 2025-11-29 08:49:35.935 227364 INFO os_vif [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55')#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.010 227364 DEBUG nova.compute.manager [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-changed-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.010 227364 DEBUG nova.compute.manager [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing instance network info cache due to event network-changed-ce1430a7-5573-42ed-950f-6b729838e557. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.011 227364 DEBUG oslo_concurrency.lockutils [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.012 227364 DEBUG oslo_concurrency.lockutils [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.012 227364 DEBUG nova.network.neutron [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing network info cache for port ce1430a7-5573-42ed-950f-6b729838e557 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.146 227364 INFO nova.virt.libvirt.driver [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deleting instance files /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_del#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.147 227364 INFO nova.virt.libvirt.driver [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deletion of /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_del complete#033[00m
Nov 29 03:49:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.812 227364 INFO nova.scheduler.client.report [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Deleted allocations for instance e82bbbb4-4776-445f-9d1d-aea28fbbcaa8#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.875 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.875 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:36 np0005539551 nova_compute[227360]: 2025-11-29 08:49:36.906 227364 DEBUG oslo_concurrency.processutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:37.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1336802824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.378 227364 DEBUG oslo_concurrency.processutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.385 227364 DEBUG nova.compute.provider_tree [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.402 227364 DEBUG nova.scheduler.client.report [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.420 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.481 227364 DEBUG oslo_concurrency.lockutils [None req-f5d83b11-5c1e-4d07-b658-c710e4e9af1c 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 8.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.659 227364 DEBUG nova.network.neutron [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated VIF entry in instance network info cache for port ce1430a7-5573-42ed-950f-6b729838e557. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.660 227364 DEBUG nova.network.neutron [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": null, "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapce1430a7-55", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:37 np0005539551 nova_compute[227360]: 2025-11-29 08:49:37.688 227364 DEBUG oslo_concurrency.lockutils [req-02f6a08d-5cef-4be7-be84-48d729ea86f1 req-e03df76b-5f6b-4e88-a115-2df92e040232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:39.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539551 nova_compute[227360]: 2025-11-29 08:49:39.713 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:40 np0005539551 nova_compute[227360]: 2025-11-29 08:49:40.933 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 03:49:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1095335144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:44 np0005539551 nova_compute[227360]: 2025-11-29 08:49:44.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:44 np0005539551 nova_compute[227360]: 2025-11-29 08:49:44.715 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539551 podman[305153]: 2025-11-29 08:49:45.595506868 +0000 UTC m=+0.047518768 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 03:49:45 np0005539551 podman[305152]: 2025-11-29 08:49:45.60922274 +0000 UTC m=+0.061292421 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:49:45 np0005539551 podman[305151]: 2025-11-29 08:49:45.625961413 +0000 UTC m=+0.082975768 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:49:45 np0005539551 nova_compute[227360]: 2025-11-29 08:49:45.935 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:46 np0005539551 nova_compute[227360]: 2025-11-29 08:49:46.936 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406171.934071, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:46 np0005539551 nova_compute[227360]: 2025-11-29 08:49:46.936 227364 INFO nova.compute.manager [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:46 np0005539551 nova_compute[227360]: 2025-11-29 08:49:46.964 227364 DEBUG nova.compute.manager [None req-506da612-c27c-4a2b-b3a6-edecff5f25e1 - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:47.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539551 nova_compute[227360]: 2025-11-29 08:49:49.717 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539551 nova_compute[227360]: 2025-11-29 08:49:50.939 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:53.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.440 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.441 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.441 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.718 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2765099508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:54 np0005539551 nova_compute[227360]: 2025-11-29 08:49:54.962 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:55.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.133 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.135 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4209MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.135 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.135 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.222 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.223 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.239 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2419053673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.670 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.675 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.692 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.716 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.717 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 03:49:55 np0005539551 nova_compute[227360]: 2025-11-29 08:49:55.941 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:49:56 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:57.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539551 nova_compute[227360]: 2025-11-29 08:49:57.718 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:57 np0005539551 nova_compute[227360]: 2025-11-29 08:49:57.718 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:49:57 np0005539551 nova_compute[227360]: 2025-11-29 08:49:57.718 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:49:57 np0005539551 nova_compute[227360]: 2025-11-29 08:49:57.750 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:49:57 np0005539551 nova_compute[227360]: 2025-11-29 08:49:57.750 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:49:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:59 np0005539551 nova_compute[227360]: 2025-11-29 08:49:59.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:59 np0005539551 nova_compute[227360]: 2025-11-29 08:49:59.721 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:00 np0005539551 nova_compute[227360]: 2025-11-29 08:50:00.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:00 np0005539551 nova_compute[227360]: 2025-11-29 08:50:00.944 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:01.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:01 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 03:50:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:03.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:03 np0005539551 nova_compute[227360]: 2025-11-29 08:50:03.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:04 np0005539551 nova_compute[227360]: 2025-11-29 08:50:04.724 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:05.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539551 nova_compute[227360]: 2025-11-29 08:50:05.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:08 np0005539551 nova_compute[227360]: 2025-11-29 08:50:08.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:08 np0005539551 nova_compute[227360]: 2025-11-29 08:50:08.965 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:08 np0005539551 nova_compute[227360]: 2025-11-29 08:50:08.965 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:08 np0005539551 nova_compute[227360]: 2025-11-29 08:50:08.985 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:50:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:09.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.104 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.105 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.114 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.114 227364 INFO nova.compute.claims [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.230 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2111726143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.669 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.677 227364 DEBUG nova.compute.provider_tree [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.698 227364 DEBUG nova.scheduler.client.report [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.725 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.728 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.729 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.813 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.814 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.849 227364 INFO nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.889 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:50:09 np0005539551 nova_compute[227360]: 2025-11-29 08:50:09.939 227364 INFO nova.virt.block_device [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Booting with volume 10605c29-d705-4bde-bb14-15733badfd18 at /dev/vda#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.098 227364 DEBUG os_brick.utils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.101 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.119 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.119 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[71d853f9-af6b-4981-8c20-91000871a224]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.122 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.135 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.136 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[e143e859-7416-4bb7-bded-ef78464643d6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.138 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.152 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.153 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bcbc44-bb81-4005-9cc4-1f1e9a0b7b3e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.155 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[81279992-33cf-45e2-84dc-d2807edb4a62]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.156 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.206 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.211 227364 DEBUG os_brick.initiator.connectors.lightos [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.212 227364 DEBUG os_brick.initiator.connectors.lightos [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.212 227364 DEBUG os_brick.initiator.connectors.lightos [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.213 227364 DEBUG os_brick.utils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.214 227364 DEBUG nova.virt.block_device [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating existing volume attachment record: 5f8fbb9c-5917-40c5-9633-0019669722aa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.612 227364 DEBUG nova.policy [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:50:10 np0005539551 nova_compute[227360]: 2025-11-29 08:50:10.949 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:11.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.516 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.518 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.518 227364 INFO nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Creating image(s)
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.519 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.519 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Ensure instance console log exists: /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.519 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.520 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.520 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.610 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:11.611 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:50:11 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:11.613 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:50:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:11 np0005539551 nova_compute[227360]: 2025-11-29 08:50:11.810 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Successfully created port: 4072b7cb-0a4c-45a6-af62-8d549a24a536 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:50:12 np0005539551 nova_compute[227360]: 2025-11-29 08:50:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:50:12 np0005539551 nova_compute[227360]: 2025-11-29 08:50:12.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:50:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:13 np0005539551 nova_compute[227360]: 2025-11-29 08:50:13.873 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Successfully updated port: 4072b7cb-0a4c-45a6-af62-8d549a24a536 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:50:13 np0005539551 nova_compute[227360]: 2025-11-29 08:50:13.893 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:50:13 np0005539551 nova_compute[227360]: 2025-11-29 08:50:13.894 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:50:13 np0005539551 nova_compute[227360]: 2025-11-29 08:50:13.894 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:50:14 np0005539551 nova_compute[227360]: 2025-11-29 08:50:14.011 227364 DEBUG nova.compute.manager [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-changed-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:50:14 np0005539551 nova_compute[227360]: 2025-11-29 08:50:14.011 227364 DEBUG nova.compute.manager [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Refreshing instance network info cache due to event network-changed-4072b7cb-0a4c-45a6-af62-8d549a24a536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:50:14 np0005539551 nova_compute[227360]: 2025-11-29 08:50:14.012 227364 DEBUG oslo_concurrency.lockutils [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:50:14 np0005539551 nova_compute[227360]: 2025-11-29 08:50:14.595 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:50:14 np0005539551 nova_compute[227360]: 2025-11-29 08:50:14.727 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:15.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.679 227364 DEBUG nova.network.neutron [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating instance_info_cache with network_info: [{"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.701 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.701 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Instance network_info: |[{"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.701 227364 DEBUG oslo_concurrency.lockutils [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.701 227364 DEBUG nova.network.neutron [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Refreshing network info cache for port 4072b7cb-0a4c-45a6-af62-8d549a24a536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.704 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Start _get_guest_xml network_info=[{"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-10605c29-d705-4bde-bb14-15733badfd18', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '10605c29-d705-4bde-bb14-15733badfd18', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8a308fe2-114e-4322-b527-1dab59c053bb', 'attached_at': '', 'detached_at': '', 'volume_id': '10605c29-d705-4bde-bb14-15733badfd18', 'serial': '10605c29-d705-4bde-bb14-15733badfd18'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '5f8fbb9c-5917-40c5-9633-0019669722aa', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.712 227364 WARNING nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.719 227364 DEBUG nova.virt.libvirt.host [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.720 227364 DEBUG nova.virt.libvirt.host [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.724 227364 DEBUG nova.virt.libvirt.host [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.724 227364 DEBUG nova.virt.libvirt.host [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.725 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.725 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.726 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.726 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.726 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.726 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.727 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.727 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.727 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.727 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.728 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.728 227364 DEBUG nova.virt.hardware [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.758 227364 DEBUG nova.storage.rbd_utils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8a308fe2-114e-4322-b527-1dab59c053bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.762 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:50:15 np0005539551 nova_compute[227360]: 2025-11-29 08:50:15.953 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:50:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/245637550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.221 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.249 227364 DEBUG nova.virt.libvirt.vif [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1651734879',display_name='tempest-TestVolumeBootPattern-server-1651734879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1651734879',id=221,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-1z7vrbfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:50:09Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8a308fe2-114e-4322-b527-1dab59c053bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.250 227364 DEBUG nova.network.os_vif_util [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.252 227364 DEBUG nova.network.os_vif_util [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.254 227364 DEBUG nova.objects.instance [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a308fe2-114e-4322-b527-1dab59c053bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.272 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <uuid>8a308fe2-114e-4322-b527-1dab59c053bb</uuid>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <name>instance-000000dd</name>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestVolumeBootPattern-server-1651734879</nova:name>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:50:15</nova:creationTime>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <nova:port uuid="4072b7cb-0a4c-45a6-af62-8d549a24a536">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="serial">8a308fe2-114e-4322-b527-1dab59c053bb</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="uuid">8a308fe2-114e-4322-b527-1dab59c053bb</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/8a308fe2-114e-4322-b527-1dab59c053bb_disk.config">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-10605c29-d705-4bde-bb14-15733badfd18">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <serial>10605c29-d705-4bde-bb14-15733badfd18</serial>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:2f:db:45"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <target dev="tap4072b7cb-0a"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/console.log" append="off"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:50:16 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:50:16 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:50:16 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:50:16 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.274 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Preparing to wait for external event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.275 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.276 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.276 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.277 227364 DEBUG nova.virt.libvirt.vif [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1651734879',display_name='tempest-TestVolumeBootPattern-server-1651734879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1651734879',id=221,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-1z7vrbfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:50:09Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8a308fe2-114e-4322-b527-1dab59c053bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.277 227364 DEBUG nova.network.os_vif_util [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.278 227364 DEBUG nova.network.os_vif_util [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.279 227364 DEBUG os_vif [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.280 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.281 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.285 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4072b7cb-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.286 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4072b7cb-0a, col_values=(('external_ids', {'iface-id': '4072b7cb-0a4c-45a6-af62-8d549a24a536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:db:45', 'vm-uuid': '8a308fe2-114e-4322-b527-1dab59c053bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.287 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:16 np0005539551 NetworkManager[48922]: <info>  [1764406216.2884] manager: (tap4072b7cb-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.297 227364 INFO os_vif [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a')#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.356 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.357 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.357 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:2f:db:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.357 227364 INFO nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Using config drive#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.385 227364 DEBUG nova.storage.rbd_utils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8a308fe2-114e-4322-b527-1dab59c053bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:50:16 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:16.616 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:16 np0005539551 podman[305525]: 2025-11-29 08:50:16.631036935 +0000 UTC m=+0.080041389 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:16 np0005539551 podman[305526]: 2025-11-29 08:50:16.67038884 +0000 UTC m=+0.109026843 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:50:16 np0005539551 podman[305524]: 2025-11-29 08:50:16.671451808 +0000 UTC m=+0.115324083 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.823 227364 INFO nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Creating config drive at /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config#033[00m
Nov 29 03:50:16 np0005539551 nova_compute[227360]: 2025-11-29 08:50:16.834 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx8iyl4f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.006 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx8iyl4f" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:17.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.054 227364 DEBUG nova.storage.rbd_utils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8a308fe2-114e-4322-b527-1dab59c053bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.060 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config 8a308fe2-114e-4322-b527-1dab59c053bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.279 227364 DEBUG oslo_concurrency.processutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config 8a308fe2-114e-4322-b527-1dab59c053bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.280 227364 INFO nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Deleting local config drive /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb/disk.config because it was imported into RBD.#033[00m
Nov 29 03:50:17 np0005539551 kernel: tap4072b7cb-0a: entered promiscuous mode
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.3445] manager: (tap4072b7cb-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 03:50:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:17Z|00933|binding|INFO|Claiming lport 4072b7cb-0a4c-45a6-af62-8d549a24a536 for this chassis.
Nov 29 03:50:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:17Z|00934|binding|INFO|4072b7cb-0a4c-45a6-af62-8d549a24a536: Claiming fa:16:3e:2f:db:45 10.100.0.6
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.347 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.350 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.361 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:db:45 10.100.0.6'], port_security=['fa:16:3e:2f:db:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8a308fe2-114e-4322-b527-1dab59c053bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4072b7cb-0a4c-45a6-af62-8d549a24a536) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.363 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4072b7cb-0a4c-45a6-af62-8d549a24a536 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.366 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.380 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadf576-9bc7-45ed-9923-8958ae793f04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.381 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.383 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.383 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[49500a53-696d-47fd-b451-9e9770ce03eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.384 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[276eb896-193c-4d57-8942-18d7ef999995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 systemd-machined[190756]: New machine qemu-96-instance-000000dd.
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.399 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5141b1-9097-4957-96dc-6afe09675383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 systemd[1]: Started Virtual Machine qemu-96-instance-000000dd.
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.412 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:17Z|00935|binding|INFO|Setting lport 4072b7cb-0a4c-45a6-af62-8d549a24a536 ovn-installed in OVS
Nov 29 03:50:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:17Z|00936|binding|INFO|Setting lport 4072b7cb-0a4c-45a6-af62-8d549a24a536 up in Southbound
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.417 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.416 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d582c73f-5189-476a-9b22-71057959b2a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 systemd-udevd[305645]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.4449] device (tap4072b7cb-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.4469] device (tap4072b7cb-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.457 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[edd0772b-c46c-4c11-a6c4-755339329620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.463 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[75b969b8-83c8-41a9-b75b-858dd748e4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.4648] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Nov 29 03:50:17 np0005539551 systemd-udevd[305649]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.500 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7fcace-adb0-4520-9509-66eefb5c0b89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.503 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[66e90c41-c644-4b7f-9e7f-fb1801940df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.5285] device (tap3d510715-d0): carrier: link connected
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.536 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecbbc46-c353-48cc-9915-7746f2a1b8f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.558 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5eafbbe7-8c1c-4c81-b3e9-9f8062155da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926440, 'reachable_time': 22498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305675, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.579 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[11953a85-b284-4054-94d6-190f5bf16f6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 926440, 'tstamp': 926440}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305676, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.607 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[23c24f90-b182-46eb-aa56-7366a80584bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926440, 'reachable_time': 22498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305677, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.651 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[93ba480c-46c5-4f1e-b849-b3460ce9d2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.737 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd249a3-576b-4909-941b-913dc5c2b191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.739 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.740 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.740 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:17 np0005539551 NetworkManager[48922]: <info>  [1764406217.7441] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Nov 29 03:50:17 np0005539551 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.744 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.749 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.751 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:17Z|00937|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.752 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.753 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[924c54ec-cf78-47fb-be38-0e25ba040d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.754 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:50:17 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:17.755 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.764 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.841 227364 DEBUG nova.compute.manager [req-ab1d5a7b-633d-4eb4-88b0-15b1fca1227b req-e558cde8-41c9-49a6-88a4-36c14babbbb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.842 227364 DEBUG oslo_concurrency.lockutils [req-ab1d5a7b-633d-4eb4-88b0-15b1fca1227b req-e558cde8-41c9-49a6-88a4-36c14babbbb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.842 227364 DEBUG oslo_concurrency.lockutils [req-ab1d5a7b-633d-4eb4-88b0-15b1fca1227b req-e558cde8-41c9-49a6-88a4-36c14babbbb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.843 227364 DEBUG oslo_concurrency.lockutils [req-ab1d5a7b-633d-4eb4-88b0-15b1fca1227b req-e558cde8-41c9-49a6-88a4-36c14babbbb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.843 227364 DEBUG nova.compute.manager [req-ab1d5a7b-633d-4eb4-88b0-15b1fca1227b req-e558cde8-41c9-49a6-88a4-36c14babbbb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Processing event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.844 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406217.837181, 8a308fe2-114e-4322-b527-1dab59c053bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.844 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.849 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.858 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.863 227364 INFO nova.virt.libvirt.driver [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Instance spawned successfully.#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.864 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.870 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.874 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.891 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.892 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.892 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.893 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.894 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.895 227364 DEBUG nova.virt.libvirt.driver [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.901 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.902 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406217.8529139, 8a308fe2-114e-4322-b527-1dab59c053bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.902 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.940 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.945 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406217.8577077, 8a308fe2-114e-4322-b527-1dab59c053bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.946 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.971 227364 INFO nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Took 6.45 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.972 227364 DEBUG nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.977 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:17 np0005539551 nova_compute[227360]: 2025-11-29 08:50:17.989 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.035 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.067 227364 INFO nova.compute.manager [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Took 9.02 seconds to build instance.#033[00m
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.085 227364 DEBUG oslo_concurrency.lockutils [None req-fb6db808-e1e1-481c-8b1e-4aab2ca994b0 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:18 np0005539551 podman[305751]: 2025-11-29 08:50:18.14254881 +0000 UTC m=+0.057761015 container create 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.199 227364 DEBUG nova.network.neutron [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updated VIF entry in instance network info cache for port 4072b7cb-0a4c-45a6-af62-8d549a24a536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.200 227364 DEBUG nova.network.neutron [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating instance_info_cache with network_info: [{"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:18 np0005539551 systemd[1]: Started libpod-conmon-9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3.scope.
Nov 29 03:50:18 np0005539551 podman[305751]: 2025-11-29 08:50:18.108431786 +0000 UTC m=+0.023644031 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.221 227364 DEBUG oslo_concurrency.lockutils [req-f4a6a770-3e34-4247-99a5-4d088aae8958 req-7b99e2e3-14b7-4166-94bf-36818459e320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:18 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:50:18 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25045b305d6e179c81bf541711754f5867a66a1eeac704291da7870449f7126/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:18 np0005539551 podman[305751]: 2025-11-29 08:50:18.274929884 +0000 UTC m=+0.190142099 container init 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:50:18 np0005539551 podman[305751]: 2025-11-29 08:50:18.282884189 +0000 UTC m=+0.198096384 container start 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:50:18 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [NOTICE]   (305770) : New worker (305772) forked
Nov 29 03:50:18 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [NOTICE]   (305770) : Loading success.
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:18 np0005539551 nova_compute[227360]: 2025-11-29 08:50:18.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:50:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:19.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.728 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:19.898 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:19.899 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:19.899 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.956 227364 DEBUG nova.compute.manager [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.956 227364 DEBUG oslo_concurrency.lockutils [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.957 227364 DEBUG oslo_concurrency.lockutils [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.957 227364 DEBUG oslo_concurrency.lockutils [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.958 227364 DEBUG nova.compute.manager [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] No waiting events found dispatching network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:19 np0005539551 nova_compute[227360]: 2025-11-29 08:50:19.958 227364 WARNING nova.compute.manager [req-a3c26737-10eb-459c-9aea-3212500e4f4d req-ea3cb058-e415-4f69-80e9-daeb2f6864c8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received unexpected event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:50:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:21 np0005539551 nova_compute[227360]: 2025-11-29 08:50:21.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:21 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:50:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:24 np0005539551 NetworkManager[48922]: <info>  [1764406224.0558] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 29 03:50:24 np0005539551 NetworkManager[48922]: <info>  [1764406224.0582] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.221 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:24 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:24Z|00938|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.244 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.717 227364 DEBUG nova.compute.manager [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-changed-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.718 227364 DEBUG nova.compute.manager [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Refreshing instance network info cache due to event network-changed-4072b7cb-0a4c-45a6-af62-8d549a24a536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.718 227364 DEBUG oslo_concurrency.lockutils [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.718 227364 DEBUG oslo_concurrency.lockutils [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.718 227364 DEBUG nova.network.neutron [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Refreshing network info cache for port 4072b7cb-0a4c-45a6-af62-8d549a24a536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:50:24 np0005539551 nova_compute[227360]: 2025-11-29 08:50:24.730 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:25.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:26 np0005539551 nova_compute[227360]: 2025-11-29 08:50:26.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:27.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:27 np0005539551 nova_compute[227360]: 2025-11-29 08:50:27.659 227364 DEBUG nova.network.neutron [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updated VIF entry in instance network info cache for port 4072b7cb-0a4c-45a6-af62-8d549a24a536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:50:27 np0005539551 nova_compute[227360]: 2025-11-29 08:50:27.660 227364 DEBUG nova.network.neutron [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating instance_info_cache with network_info: [{"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:27 np0005539551 nova_compute[227360]: 2025-11-29 08:50:27.722 227364 DEBUG oslo_concurrency.lockutils [req-d3eb07e3-5a18-4766-a6fa-a5a1e8faba58 req-0f7f40c9-7fad-489d-a289-f3cd59dcbd7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8a308fe2-114e-4322-b527-1dab59c053bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:28 np0005539551 nova_compute[227360]: 2025-11-29 08:50:28.440 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:28 np0005539551 nova_compute[227360]: 2025-11-29 08:50:28.441 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:50:28 np0005539551 nova_compute[227360]: 2025-11-29 08:50:28.466 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:50:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:29.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:29 np0005539551 nova_compute[227360]: 2025-11-29 08:50:29.733 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:50:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247237203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:50:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:50:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247237203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:50:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:31.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:31 np0005539551 nova_compute[227360]: 2025-11-29 08:50:31.294 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:31Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:db:45 10.100.0.6
Nov 29 03:50:31 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:31Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:db:45 10.100.0.6
Nov 29 03:50:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:34 np0005539551 nova_compute[227360]: 2025-11-29 08:50:34.735 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:35.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:35Z|00939|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:50:35 np0005539551 nova_compute[227360]: 2025-11-29 08:50:35.930 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:36 np0005539551 nova_compute[227360]: 2025-11-29 08:50:36.296 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:37.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:37.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:39.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:39 np0005539551 nova_compute[227360]: 2025-11-29 08:50:39.781 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:39 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:39Z|00940|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:50:39 np0005539551 nova_compute[227360]: 2025-11-29 08:50:39.987 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.686 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.687 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.687 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.687 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.687 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.688 227364 INFO nova.compute.manager [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Terminating instance#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.689 227364 DEBUG nova.compute.manager [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:50:40 np0005539551 kernel: tap4072b7cb-0a (unregistering): left promiscuous mode
Nov 29 03:50:40 np0005539551 NetworkManager[48922]: <info>  [1764406240.7786] device (tap4072b7cb-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:50:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:40Z|00941|binding|INFO|Releasing lport 4072b7cb-0a4c-45a6-af62-8d549a24a536 from this chassis (sb_readonly=0)
Nov 29 03:50:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:40Z|00942|binding|INFO|Setting lport 4072b7cb-0a4c-45a6-af62-8d549a24a536 down in Southbound
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.788 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 ovn_controller[130266]: 2025-11-29T08:50:40Z|00943|binding|INFO|Removing iface tap4072b7cb-0a ovn-installed in OVS
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.792 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:40.800 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:db:45 10.100.0.6'], port_security=['fa:16:3e:2f:db:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8a308fe2-114e-4322-b527-1dab59c053bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=4072b7cb-0a4c-45a6-af62-8d549a24a536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:40.801 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 4072b7cb-0a4c-45a6-af62-8d549a24a536 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:50:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:40.803 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:50:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:40.804 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[9095c261-7cfd-4c50-9141-5b492fcae235]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:40 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:40.805 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.807 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Nov 29 03:50:40 np0005539551 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000dd.scope: Consumed 14.313s CPU time.
Nov 29 03:50:40 np0005539551 systemd-machined[190756]: Machine qemu-96-instance-000000dd terminated.
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.920 227364 INFO nova.virt.libvirt.driver [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Instance destroyed successfully.#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.921 227364 DEBUG nova.objects.instance [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 8a308fe2-114e-4322-b527-1dab59c053bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [NOTICE]   (305770) : haproxy version is 2.8.14-c23fe91
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [NOTICE]   (305770) : path to executable is /usr/sbin/haproxy
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [WARNING]  (305770) : Exiting Master process...
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [WARNING]  (305770) : Exiting Master process...
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [ALERT]    (305770) : Current worker (305772) exited with code 143 (Terminated)
Nov 29 03:50:40 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[305766]: [WARNING]  (305770) : All workers exited. Exiting... (0)
Nov 29 03:50:40 np0005539551 systemd[1]: libpod-9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3.scope: Deactivated successfully.
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.943 227364 DEBUG nova.virt.libvirt.vif [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:50:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1651734879',display_name='tempest-TestVolumeBootPattern-server-1651734879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1651734879',id=221,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:50:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-1z7vrbfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:50:18Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8a308fe2-114e-4322-b527-1dab59c053bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.943 227364 DEBUG nova.network.os_vif_util [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "address": "fa:16:3e:2f:db:45", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4072b7cb-0a", "ovs_interfaceid": "4072b7cb-0a4c-45a6-af62-8d549a24a536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.944 227364 DEBUG nova.network.os_vif_util [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.944 227364 DEBUG os_vif [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:50:40 np0005539551 podman[305807]: 2025-11-29 08:50:40.945204912 +0000 UTC m=+0.045377010 container died 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.947 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4072b7cb-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.985 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.987 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539551 nova_compute[227360]: 2025-11-29 08:50:40.990 227364 INFO os_vif [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:db:45,bridge_name='br-int',has_traffic_filtering=True,id=4072b7cb-0a4c-45a6-af62-8d549a24a536,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4072b7cb-0a')#033[00m
Nov 29 03:50:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:41 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:50:41 np0005539551 systemd[1]: var-lib-containers-storage-overlay-d25045b305d6e179c81bf541711754f5867a66a1eeac704291da7870449f7126-merged.mount: Deactivated successfully.
Nov 29 03:50:41 np0005539551 podman[305807]: 2025-11-29 08:50:41.189467685 +0000 UTC m=+0.289639823 container cleanup 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:50:41 np0005539551 systemd[1]: libpod-conmon-9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3.scope: Deactivated successfully.
Nov 29 03:50:41 np0005539551 podman[305866]: 2025-11-29 08:50:41.407603502 +0000 UTC m=+0.192026831 container remove 9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.413 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e00562-0cb9-4d18-bf8d-286271114959]: (4, ('Sat Nov 29 08:50:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3)\n9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3\nSat Nov 29 08:50:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3)\n9b3cf832f291c913656a89169170fa5f1735dae8ac39ecd02f8dee809b82ecf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.415 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[0eebc7a8-0d0e-4f1f-ac2a-3c22499048f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.416 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.418 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:41 np0005539551 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.430 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.434 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[87a395b6-fb8f-4ce8-a555-565c9e1d53dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.450 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[de00d9c4-dd3c-4c2c-86c0-54cb96b8db40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.451 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f47d9a5c-df6a-40f0-acb9-895103145e6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.473 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c412cb96-a83b-4d7a-8496-cce30cdf2fab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926432, 'reachable_time': 44301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305881, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.476 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:50:41 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:41.476 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[24144ff6-826d-46d6-b3f7-34d2edd9baaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:41 np0005539551 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.646 227364 DEBUG nova.compute.manager [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-unplugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.647 227364 DEBUG oslo_concurrency.lockutils [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.647 227364 DEBUG oslo_concurrency.lockutils [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.647 227364 DEBUG oslo_concurrency.lockutils [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.648 227364 DEBUG nova.compute.manager [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] No waiting events found dispatching network-vif-unplugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:41 np0005539551 nova_compute[227360]: 2025-11-29 08:50:41.648 227364 DEBUG nova.compute.manager [req-f9027200-27bc-4285-a346-4b2460574118 req-1772df36-d488-4787-93b3-e86ab38b000f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-unplugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:50:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.341 227364 INFO nova.virt.libvirt.driver [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Deleting instance files /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb_del#033[00m
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.343 227364 INFO nova.virt.libvirt.driver [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Deletion of /var/lib/nova/instances/8a308fe2-114e-4322-b527-1dab59c053bb_del complete#033[00m
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.395 227364 INFO nova.compute.manager [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Took 1.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.396 227364 DEBUG oslo.service.loopingcall [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.397 227364 DEBUG nova.compute.manager [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:50:42 np0005539551 nova_compute[227360]: 2025-11-29 08:50:42.397 227364 DEBUG nova.network.neutron [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:50:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:43.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.772 227364 DEBUG nova.compute.manager [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.773 227364 DEBUG oslo_concurrency.lockutils [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.773 227364 DEBUG oslo_concurrency.lockutils [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.773 227364 DEBUG oslo_concurrency.lockutils [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.774 227364 DEBUG nova.compute.manager [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] No waiting events found dispatching network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:43 np0005539551 nova_compute[227360]: 2025-11-29 08:50:43.774 227364 WARNING nova.compute.manager [req-7845e52a-b748-4d98-b0f5-c6133dbf2216 req-8ed5e9eb-75d6-4b9b-a302-52d0053fdd7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received unexpected event network-vif-plugged-4072b7cb-0a4c-45a6-af62-8d549a24a536 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:50:44 np0005539551 nova_compute[227360]: 2025-11-29 08:50:44.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:44 np0005539551 nova_compute[227360]: 2025-11-29 08:50:44.852 227364 DEBUG nova.network.neutron [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:44 np0005539551 nova_compute[227360]: 2025-11-29 08:50:44.970 227364 INFO nova.compute.manager [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Took 2.57 seconds to deallocate network for instance.#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.016 227364 DEBUG nova.compute.manager [req-f38f6ebd-de77-4b4b-9bef-e1c510783a97 req-de16adb3-bc8b-4ac3-a13f-638e8566f0d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Received event network-vif-deleted-4072b7cb-0a4c-45a6-af62-8d549a24a536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.017 227364 INFO nova.compute.manager [req-f38f6ebd-de77-4b4b-9bef-e1c510783a97 req-de16adb3-bc8b-4ac3-a13f-638e8566f0d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Neutron deleted interface 4072b7cb-0a4c-45a6-af62-8d549a24a536; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.017 227364 DEBUG nova.network.neutron [req-f38f6ebd-de77-4b4b-9bef-e1c510783a97 req-de16adb3-bc8b-4ac3-a13f-638e8566f0d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.039 227364 DEBUG nova.compute.manager [req-f38f6ebd-de77-4b4b-9bef-e1c510783a97 req-de16adb3-bc8b-4ac3-a13f-638e8566f0d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Detach interface failed, port_id=4072b7cb-0a4c-45a6-af62-8d549a24a536, reason: Instance 8a308fe2-114e-4322-b527-1dab59c053bb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:50:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:45.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.411 227364 INFO nova.compute.manager [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Took 0.44 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.485 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.486 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.713 227364 DEBUG oslo_concurrency.processutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:45 np0005539551 nova_compute[227360]: 2025-11-29 08:50:45.988 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/975072056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.152 227364 DEBUG oslo_concurrency.processutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.157 227364 DEBUG nova.compute.provider_tree [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.171 227364 DEBUG nova.scheduler.client.report [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.194 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.237 227364 INFO nova.scheduler.client.report [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 8a308fe2-114e-4322-b527-1dab59c053bb#033[00m
Nov 29 03:50:46 np0005539551 nova_compute[227360]: 2025-11-29 08:50:46.336 227364 DEBUG oslo_concurrency.lockutils [None req-1d3e1c31-3631-414d-bfb1-5975cbdabc8c b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8a308fe2-114e-4322-b527-1dab59c053bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:47 np0005539551 podman[305906]: 2025-11-29 08:50:47.634063309 +0000 UTC m=+0.075072893 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:47 np0005539551 podman[305907]: 2025-11-29 08:50:47.640708359 +0000 UTC m=+0.086637637 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:47 np0005539551 podman[305905]: 2025-11-29 08:50:47.67547179 +0000 UTC m=+0.117780060 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:50:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:49 np0005539551 nova_compute[227360]: 2025-11-29 08:50:49.785 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:51.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:51.219 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:51 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:51.219 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:50:51 np0005539551 nova_compute[227360]: 2025-11-29 08:50:51.249 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.286180) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251286234, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1292, "num_deletes": 253, "total_data_size": 2724089, "memory_usage": 2758864, "flush_reason": "Manual Compaction"}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251298672, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 1796726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71831, "largest_seqno": 73118, "table_properties": {"data_size": 1791102, "index_size": 2954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12452, "raw_average_key_size": 20, "raw_value_size": 1779702, "raw_average_value_size": 2893, "num_data_blocks": 131, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406151, "oldest_key_time": 1764406151, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 12537 microseconds, and 5706 cpu microseconds.
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.298717) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 1796726 bytes OK
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.298738) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.300566) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.300582) EVENT_LOG_v1 {"time_micros": 1764406251300577, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.300599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2717948, prev total WAL file size 2717948, number of live WAL files 2.
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.301583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(1754KB)], [147(11MB)]
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251301621, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14338031, "oldest_snapshot_seqno": -1}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10191 keys, 12487100 bytes, temperature: kUnknown
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251554126, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12487100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12421941, "index_size": 38550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 270058, "raw_average_key_size": 26, "raw_value_size": 12243682, "raw_average_value_size": 1201, "num_data_blocks": 1458, "num_entries": 10191, "num_filter_entries": 10191, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.554388) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12487100 bytes
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.685620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.8 rd, 49.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.9) write-amplify(6.9) OK, records in: 10714, records dropped: 523 output_compression: NoCompression
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.685658) EVENT_LOG_v1 {"time_micros": 1764406251685643, "job": 94, "event": "compaction_finished", "compaction_time_micros": 252570, "compaction_time_cpu_micros": 38515, "output_level": 6, "num_output_files": 1, "total_output_size": 12487100, "num_input_records": 10714, "num_output_records": 10191, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251686092, "job": 94, "event": "table_file_deletion", "file_number": 149}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251688633, "job": 94, "event": "table_file_deletion", "file_number": 147}
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.301445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.688660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.688665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.688667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.688670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:50:51.688672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:52 np0005539551 nova_compute[227360]: 2025-11-29 08:50:52.091 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:53.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.436 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.478 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.479 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.479 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.480 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.480 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.787 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/677442494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:54 np0005539551 nova_compute[227360]: 2025-11-29 08:50:54.942 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.101 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.102 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4207MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.102 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.102 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.195 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.196 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.211 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2948640866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.636 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.641 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.657 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.675 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.676 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.920 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406240.918842, 8a308fe2-114e-4322-b527-1dab59c053bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.921 227364 INFO nova.compute.manager [-] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:50:55 np0005539551 nova_compute[227360]: 2025-11-29 08:50:55.942 227364 DEBUG nova.compute.manager [None req-35417ff7-69a9-4d5f-bea7-78e7ce7eb7d8 - - - - - -] [instance: 8a308fe2-114e-4322-b527-1dab59c053bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:56 np0005539551 nova_compute[227360]: 2025-11-29 08:50:56.252 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:57.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:57 np0005539551 nova_compute[227360]: 2025-11-29 08:50:57.158 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:57 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:50:57.221 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:58 np0005539551 nova_compute[227360]: 2025-11-29 08:50:58.651 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:58 np0005539551 nova_compute[227360]: 2025-11-29 08:50:58.651 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:50:58 np0005539551 nova_compute[227360]: 2025-11-29 08:50:58.651 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:50:58 np0005539551 nova_compute[227360]: 2025-11-29 08:50:58.669 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:50:58 np0005539551 nova_compute[227360]: 2025-11-29 08:50:58.670 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:50:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:59 np0005539551 nova_compute[227360]: 2025-11-29 08:50:59.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:59 np0005539551 nova_compute[227360]: 2025-11-29 08:50:59.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:01 np0005539551 nova_compute[227360]: 2025-11-29 08:51:01.256 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539551 nova_compute[227360]: 2025-11-29 08:51:01.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:01.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:03.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:04 np0005539551 nova_compute[227360]: 2025-11-29 08:51:04.250 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:04 np0005539551 nova_compute[227360]: 2025-11-29 08:51:04.411 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:04 np0005539551 nova_compute[227360]: 2025-11-29 08:51:04.793 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:05 np0005539551 nova_compute[227360]: 2025-11-29 08:51:05.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:05.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:06 np0005539551 nova_compute[227360]: 2025-11-29 08:51:06.258 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:07.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:08 np0005539551 nova_compute[227360]: 2025-11-29 08:51:08.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:09 np0005539551 nova_compute[227360]: 2025-11-29 08:51:09.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:11 np0005539551 nova_compute[227360]: 2025-11-29 08:51:11.262 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:12 np0005539551 nova_compute[227360]: 2025-11-29 08:51:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:14 np0005539551 nova_compute[227360]: 2025-11-29 08:51:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:14 np0005539551 nova_compute[227360]: 2025-11-29 08:51:14.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:51:14 np0005539551 nova_compute[227360]: 2025-11-29 08:51:14.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:15.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:15.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:16 np0005539551 nova_compute[227360]: 2025-11-29 08:51:16.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:17.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539551 podman[306198]: 2025-11-29 08:51:18.63916096 +0000 UTC m=+0.080604803 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:51:18 np0005539551 podman[306197]: 2025-11-29 08:51:18.650756325 +0000 UTC m=+0.093625037 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 29 03:51:18 np0005539551 podman[306196]: 2025-11-29 08:51:18.667099337 +0000 UTC m=+0.114671576 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:51:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:19.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:19 np0005539551 nova_compute[227360]: 2025-11-29 08:51:19.797 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:51:19.899 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:51:19.900 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:51:19.900 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:21 np0005539551 nova_compute[227360]: 2025-11-29 08:51:21.266 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:21.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:23.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:23.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:24 np0005539551 nova_compute[227360]: 2025-11-29 08:51:24.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:25.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:26 np0005539551 nova_compute[227360]: 2025-11-29 08:51:26.270 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:27.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:29 np0005539551 nova_compute[227360]: 2025-11-29 08:51:29.848 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539551 nova_compute[227360]: 2025-11-29 08:51:31.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:31.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:34 np0005539551 nova_compute[227360]: 2025-11-29 08:51:34.851 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:35.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:36 np0005539551 nova_compute[227360]: 2025-11-29 08:51:36.275 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:37.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:39.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:39 np0005539551 nova_compute[227360]: 2025-11-29 08:51:39.853 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539551 nova_compute[227360]: 2025-11-29 08:51:41.278 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 03:51:44 np0005539551 nova_compute[227360]: 2025-11-29 08:51:44.855 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:45.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:46 np0005539551 nova_compute[227360]: 2025-11-29 08:51:46.280 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539551 nova_compute[227360]: 2025-11-29 08:51:47.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:48 np0005539551 ovn_controller[130266]: 2025-11-29T08:51:48Z|00944|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 29 03:51:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:49 np0005539551 podman[306256]: 2025-11-29 08:51:49.605328957 +0000 UTC m=+0.058372100 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:51:49 np0005539551 podman[306257]: 2025-11-29 08:51:49.631100406 +0000 UTC m=+0.082185576 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:51:49 np0005539551 podman[306255]: 2025-11-29 08:51:49.631333802 +0000 UTC m=+0.086644716 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:49 np0005539551 nova_compute[227360]: 2025-11-29 08:51:49.855 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:51 np0005539551 nova_compute[227360]: 2025-11-29 08:51:51.282 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:51.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:53.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:54 np0005539551 nova_compute[227360]: 2025-11-29 08:51:54.858 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:55.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:55.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.250 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.250 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.285 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.329 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.516 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.517 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.518 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.518 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.519 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.575 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.576 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.585 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.586 227364 INFO nova.compute.claims [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.828 227364 DEBUG nova.scheduler.client.report [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.883 227364 DEBUG nova.scheduler.client.report [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.884 227364 DEBUG nova.compute.provider_tree [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.905 227364 DEBUG nova.scheduler.client.report [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:51:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1202719490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.926 227364 DEBUG nova.scheduler.client.report [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.937 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:56 np0005539551 nova_compute[227360]: 2025-11-29 08:51:56.961 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.117 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.119 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4236MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.119 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:57.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1967553369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.391 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.397 227364 DEBUG nova.compute.provider_tree [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:57.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.612 227364 DEBUG nova.scheduler.client.report [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.671 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.672 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.674 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.960 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Instance 1a52df6e-2dba-41dc-bbcf-68954705ed0a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.960 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.960 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:51:57 np0005539551 nova_compute[227360]: 2025-11-29 08:51:57.999 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.037 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.038 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.065 227364 INFO nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.100 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.151 227364 INFO nova.virt.block_device [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Booting with volume fee77070-ed28-42d2-8e57-eb8da81862b4 at /dev/vda#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.408 227364 DEBUG nova.policy [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.418 227364 DEBUG os_brick.utils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.419 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938491352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.432 245195 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.432 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[6af99975-d000-4ce1-bf28-1b4a104626f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.433 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.441 245195 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.441 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[4d313b8b-ec67-4267-8411-f8db27a6c3c8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34268feecb6', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.443 245195 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.449 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.453 245195 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.453 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[d78aa429-79fd-4e7f-8e83-f8f45b97cb96]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.454 245195 DEBUG oslo.privsep.daemon [-] privsep: reply[38571dd5-e950-4756-b358-7f935d208e05]: (4, '2d58586e-4ce1-425d-890c-d5cdff75e822') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.455 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.489 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.492 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.494 227364 DEBUG os_brick.initiator.connectors.lightos [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.495 227364 DEBUG os_brick.initiator.connectors.lightos [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.495 227364 DEBUG os_brick.initiator.connectors.lightos [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.495 227364 DEBUG os_brick.utils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34268feecb6', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '2d58586e-4ce1-425d-890c-d5cdff75e822', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.495 227364 DEBUG nova.virt.block_device [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating existing volume attachment record: 0f9405e6-f520-48b3-a6cb-d5f56827c6b9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.530 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.550 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:51:58 np0005539551 nova_compute[227360]: 2025-11-29 08:51:58.551 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 03:51:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:51:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:51:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3822588138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:51:59 np0005539551 nova_compute[227360]: 2025-11-29 08:51:59.905 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.552 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.553 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.553 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.600 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.601 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.601 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.602 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.812 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.815 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.816 227364 INFO nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Creating image(s)#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.816 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.817 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Ensure instance console log exists: /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.818 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.818 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.819 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:00 np0005539551 nova_compute[227360]: 2025-11-29 08:52:00.821 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Successfully created port: 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:52:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:01 np0005539551 nova_compute[227360]: 2025-11-29 08:52:01.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:01.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 03:52:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.020 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Successfully updated port: 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.036 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.036 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.036 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.117 227364 DEBUG nova.compute.manager [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.118 227364 DEBUG nova.compute.manager [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing instance network info cache due to event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.118 227364 DEBUG oslo_concurrency.lockutils [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.454 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:03 np0005539551 nova_compute[227360]: 2025-11-29 08:52:03.574 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:52:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:03.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.731 227364 DEBUG nova.network.neutron [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.758 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.758 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Instance network_info: |[{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.759 227364 DEBUG oslo_concurrency.lockutils [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.759 227364 DEBUG nova.network.neutron [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.762 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Start _get_guest_xml network_info=[{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fee77070-ed28-42d2-8e57-eb8da81862b4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fee77070-ed28-42d2-8e57-eb8da81862b4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1a52df6e-2dba-41dc-bbcf-68954705ed0a', 'attached_at': '', 'detached_at': '', 'volume_id': 'fee77070-ed28-42d2-8e57-eb8da81862b4', 'serial': 'fee77070-ed28-42d2-8e57-eb8da81862b4'}, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '0f9405e6-f520-48b3-a6cb-d5f56827c6b9', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.765 227364 WARNING nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.770 227364 DEBUG nova.virt.libvirt.host [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.771 227364 DEBUG nova.virt.libvirt.host [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.774 227364 DEBUG nova.virt.libvirt.host [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.774 227364 DEBUG nova.virt.libvirt.host [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.775 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.775 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.776 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.776 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.776 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.777 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.777 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.777 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.777 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.778 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.778 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.778 227364 DEBUG nova.virt.hardware [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.801 227364 DEBUG nova.storage.rbd_utils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.805 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:04 np0005539551 nova_compute[227360]: 2025-11-29 08:52:04.907 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:05.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3961577610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.269 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.300 227364 DEBUG nova.virt.libvirt.vif [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1971016946',display_name='tempest-TestVolumeBootPattern-server-1971016946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1971016946',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-45cj73bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:58Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=1a52df6e-2dba-41dc-bbcf-68954705ed0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.301 227364 DEBUG nova.network.os_vif_util [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.301 227364 DEBUG nova.network.os_vif_util [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.303 227364 DEBUG nova.objects.instance [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a52df6e-2dba-41dc-bbcf-68954705ed0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.318 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <uuid>1a52df6e-2dba-41dc-bbcf-68954705ed0a</uuid>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <name>instance-000000df</name>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <memory>131072</memory>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <vcpu>1</vcpu>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <metadata>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:name>tempest-TestVolumeBootPattern-server-1971016946</nova:name>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:creationTime>2025-11-29 08:52:04</nova:creationTime>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:flavor name="m1.nano">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:memory>128</nova:memory>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:disk>1</nova:disk>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:swap>0</nova:swap>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </nova:flavor>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:owner>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </nova:owner>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <nova:ports>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <nova:port uuid="88369384-0ebd-4a3f-9cd4-3a6ffd4271ea">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        </nova:port>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </nova:ports>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </nova:instance>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </metadata>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <sysinfo type="smbios">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <system>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="serial">1a52df6e-2dba-41dc-bbcf-68954705ed0a</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="uuid">1a52df6e-2dba-41dc-bbcf-68954705ed0a</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </system>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </sysinfo>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <os>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <boot dev="hd"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <smbios mode="sysinfo"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </os>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <features>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <acpi/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <apic/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <vmcoreinfo/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </features>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <clock offset="utc">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <timer name="hpet" present="no"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </clock>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <cpu mode="custom" match="exact">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <model>Nehalem</model>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </cpu>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  <devices>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <disk type="network" device="cdrom">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <driver type="raw" cache="none"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="vms/1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <target dev="sda" bus="sata"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <disk type="network" device="disk">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <source protocol="rbd" name="volumes/volume-fee77070-ed28-42d2-8e57-eb8da81862b4">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </source>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <auth username="openstack">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      </auth>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <target dev="vda" bus="virtio"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <serial>fee77070-ed28-42d2-8e57-eb8da81862b4</serial>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </disk>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <interface type="ethernet">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <mac address="fa:16:3e:36:a5:df"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <mtu size="1442"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <target dev="tap88369384-0e"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </interface>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <serial type="pty">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <log file="/var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/console.log" append="off"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </serial>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <video>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <model type="virtio"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </video>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <input type="tablet" bus="usb"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <rng model="virtio">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </rng>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <controller type="usb" index="0"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    <memballoon model="virtio">
Nov 29 03:52:05 np0005539551 nova_compute[227360]:      <stats period="10"/>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:    </memballoon>
Nov 29 03:52:05 np0005539551 nova_compute[227360]:  </devices>
Nov 29 03:52:05 np0005539551 nova_compute[227360]: </domain>
Nov 29 03:52:05 np0005539551 nova_compute[227360]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.319 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Preparing to wait for external event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.320 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.321 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.321 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.322 227364 DEBUG nova.virt.libvirt.vif [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1971016946',display_name='tempest-TestVolumeBootPattern-server-1971016946',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1971016946',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-45cj73bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:51:58Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=1a52df6e-2dba-41dc-bbcf-68954705ed0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.323 227364 DEBUG nova.network.os_vif_util [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.324 227364 DEBUG nova.network.os_vif_util [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.324 227364 DEBUG os_vif [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.326 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.326 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.331 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88369384-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.332 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88369384-0e, col_values=(('external_ids', {'iface-id': '88369384-0ebd-4a3f-9cd4-3a6ffd4271ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:a5:df', 'vm-uuid': '1a52df6e-2dba-41dc-bbcf-68954705ed0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.334 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539551 NetworkManager[48922]: <info>  [1764406325.3351] manager: (tap88369384-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.336 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.344 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.346 227364 INFO os_vif [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e')#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.404 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.404 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.404 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:36:a5:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.405 227364 INFO nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Using config drive#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.440 227364 DEBUG nova.storage.rbd_utils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:05.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.832 227364 INFO nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Creating config drive at /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.837 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfejopy4u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:05 np0005539551 nova_compute[227360]: 2025-11-29 08:52:05.976 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfejopy4u" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.006 227364 DEBUG nova.storage.rbd_utils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.009 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config 1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.152 227364 DEBUG nova.network.neutron [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updated VIF entry in instance network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.153 227364 DEBUG nova.network.neutron [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.176 227364 DEBUG oslo_concurrency.lockutils [req-63636fae-5e38-42c9-99b5-94cb4cbea35e req-68027756-92ce-40a3-ab26-18b8e1c2fc6c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.209 227364 DEBUG oslo_concurrency.processutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config 1a52df6e-2dba-41dc-bbcf-68954705ed0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.210 227364 INFO nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Deleting local config drive /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:52:06 np0005539551 kernel: tap88369384-0e: entered promiscuous mode
Nov 29 03:52:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:06Z|00945|binding|INFO|Claiming lport 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea for this chassis.
Nov 29 03:52:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:06Z|00946|binding|INFO|88369384-0ebd-4a3f-9cd4-3a6ffd4271ea: Claiming fa:16:3e:36:a5:df 10.100.0.8
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.2685] manager: (tap88369384-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.267 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.278 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.283 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.288 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.2887] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.2899] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.292 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:a5:df 10.100.0.8'], port_security=['fa:16:3e:36:a5:df 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1a52df6e-2dba-41dc-bbcf-68954705ed0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.294 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.296 139482 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:52:06 np0005539551 systemd-udevd[306501]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:06 np0005539551 systemd-machined[190756]: New machine qemu-97-instance-000000df.
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.314 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ce788df8-432f-48a5-8ffa-c23780ffe098]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.316 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.3185] device (tap88369384-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.318 231643 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.318 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[c46f63c8-f462-4ef1-9430-a47b690a2ce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.320 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bb32a009-62c8-456d-838f-2489c99d04b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.3208] device (tap88369384-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.337 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc9685e-aede-4cd1-9613-4c532a10dc7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 systemd[1]: Started Virtual Machine qemu-97-instance-000000df.
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.370 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[06e5a3ce-6597-443a-b960-b978188cb87c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.403 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[4529e04b-e6b4-47ff-8973-24c7483fdaa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:06 np0005539551 systemd-udevd[306505]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.4132] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.412 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[e410a2fe-e7ff-41c6-b36b-ee04a845de31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.456 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[e12152e7-13c1-4626-b1f1-1765b3615228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.467 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[63ad857a-2674-4b05-9ade-012f66a80c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.485 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.5047] device (tap3d510715-d0): carrier: link connected
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.506 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.508 231659 DEBUG oslo.privsep.daemon [-] privsep: reply[11c8c796-89cf-4e17-bb54-93dddde615d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:06Z|00947|binding|INFO|Setting lport 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea ovn-installed in OVS
Nov 29 03:52:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:06Z|00948|binding|INFO|Setting lport 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea up in Southbound
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.517 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.535 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[5f57430c-c630-4dc3-832c-4f7cc55543c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937338, 'reachable_time': 44749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306535, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.556 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[92a13fb9-a990-4286-a95d-e74fc753a239]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 937338, 'tstamp': 937338}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306536, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.579 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[485666b0-3185-40d7-987b-816b7f028b10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937338, 'reachable_time': 44749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306537, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.614 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[06f56f25-d4e0-4cd4-bd2f-18cfeffe8c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.675 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb18db6-d436-493a-b9ff-899276230296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.676 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.677 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.677 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.679 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:52:06 np0005539551 NetworkManager[48922]: <info>  [1764406326.6801] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.685 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:06Z|00949|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.687 139482 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.688 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[4b52109d-0abb-433b-ab00-428a2ddd1623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.688 139482 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: global
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    log         /dev/log local0 debug
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    user        root
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    group       root
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    maxconn     1024
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    daemon
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: defaults
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    log global
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    mode http
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    option httplog
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    option dontlognull
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    option http-server-close
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    option forwardfor
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    retries                 3
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-request    30s
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    timeout connect         30s
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    timeout client          32s
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    timeout server          32s
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    timeout http-keep-alive 30s
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: listen listener
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    bind 169.254.169.254:80
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:52:06 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:06.689 139482 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.698 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.930 227364 DEBUG nova.compute.manager [req-0ba9690c-2f93-4caf-8a7d-c6e911cd25cc req-94a71c12-2550-4128-bed0-c2cf532e60e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.931 227364 DEBUG oslo_concurrency.lockutils [req-0ba9690c-2f93-4caf-8a7d-c6e911cd25cc req-94a71c12-2550-4128-bed0-c2cf532e60e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.931 227364 DEBUG oslo_concurrency.lockutils [req-0ba9690c-2f93-4caf-8a7d-c6e911cd25cc req-94a71c12-2550-4128-bed0-c2cf532e60e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.931 227364 DEBUG oslo_concurrency.lockutils [req-0ba9690c-2f93-4caf-8a7d-c6e911cd25cc req-94a71c12-2550-4128-bed0-c2cf532e60e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:06 np0005539551 nova_compute[227360]: 2025-11-29 08:52:06.932 227364 DEBUG nova.compute.manager [req-0ba9690c-2f93-4caf-8a7d-c6e911cd25cc req-94a71c12-2550-4128-bed0-c2cf532e60e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Processing event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:52:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:07.013 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.014 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:07 np0005539551 podman[306582]: 2025-11-29 08:52:07.042717509 +0000 UTC m=+0.050562070 container create 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:07 np0005539551 systemd[1]: Started libpod-conmon-9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2.scope.
Nov 29 03:52:07 np0005539551 systemd[1]: Started libcrun container.
Nov 29 03:52:07 np0005539551 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867e85511b49786005ca1f68fcb1ea3f1ad0359e1c88b6bc02fe836c3b8a241/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:07 np0005539551 podman[306582]: 2025-11-29 08:52:07.110783712 +0000 UTC m=+0.118628293 container init 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:52:07 np0005539551 podman[306582]: 2025-11-29 08:52:07.016708655 +0000 UTC m=+0.024553236 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:52:07 np0005539551 podman[306582]: 2025-11-29 08:52:07.117914105 +0000 UTC m=+0.125758656 container start 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:52:07 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [NOTICE]   (306627) : New worker (306630) forked
Nov 29 03:52:07 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [NOTICE]   (306627) : Loading success.
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.169 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406327.1692173, 1a52df6e-2dba-41dc-bbcf-68954705ed0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.170 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.171 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.174 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.177 227364 INFO nova.virt.libvirt.driver [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Instance spawned successfully.#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.177 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:52:07 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:07.187 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.202 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.208 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.212 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.212 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.212 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.213 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.213 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.214 227364 DEBUG nova.virt.libvirt.driver [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:07.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.256 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.257 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406327.169459, 1a52df6e-2dba-41dc-bbcf-68954705ed0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.257 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.298 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.302 227364 DEBUG nova.virt.driver [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] Emitting event <LifecycleEvent: 1764406327.174018, 1a52df6e-2dba-41dc-bbcf-68954705ed0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.302 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.324 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.326 227364 DEBUG nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.339 227364 INFO nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Took 6.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.340 227364 DEBUG nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.353 227364 INFO nova.compute.manager [None req-f8eccdbb-e04a-4543-a8d6-d2b952e37bfa - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.409 227364 INFO nova.compute.manager [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Took 10.91 seconds to build instance.#033[00m
Nov 29 03:52:07 np0005539551 nova_compute[227360]: 2025-11-29 08:52:07.432 227364 DEBUG oslo_concurrency.lockutils [None req-d54135aa-d1c8-4f17-bca7-417ab575ca67 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:07.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.034 227364 DEBUG nova.compute.manager [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.034 227364 DEBUG oslo_concurrency.lockutils [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.034 227364 DEBUG oslo_concurrency.lockutils [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.035 227364 DEBUG oslo_concurrency.lockutils [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.035 227364 DEBUG nova.compute.manager [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] No waiting events found dispatching network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.035 227364 WARNING nova.compute.manager [req-9fbef4de-7877-4063-b480-f9084941f7d8 req-58680944-8d55-480a-b832-c434a6765bb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received unexpected event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea for instance with vm_state active and task_state None.#033[00m
Nov 29 03:52:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:09.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:09.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539551 nova_compute[227360]: 2025-11-29 08:52:09.910 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:10 np0005539551 nova_compute[227360]: 2025-11-29 08:52:10.370 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:11.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:11.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:14 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:14.189 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:14 np0005539551 nova_compute[227360]: 2025-11-29 08:52:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:14 np0005539551 nova_compute[227360]: 2025-11-29 08:52:14.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:14 np0005539551 nova_compute[227360]: 2025-11-29 08:52:14.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:52:14 np0005539551 nova_compute[227360]: 2025-11-29 08:52:14.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.372 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.723 227364 DEBUG nova.compute.manager [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.725 227364 DEBUG nova.compute.manager [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing instance network info cache due to event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.725 227364 DEBUG oslo_concurrency.lockutils [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.726 227364 DEBUG oslo_concurrency.lockutils [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:15 np0005539551 nova_compute[227360]: 2025-11-29 08:52:15.727 227364 DEBUG nova.network.neutron [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:17.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:17 np0005539551 nova_compute[227360]: 2025-11-29 08:52:17.758 227364 DEBUG nova.network.neutron [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updated VIF entry in instance network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:17 np0005539551 nova_compute[227360]: 2025-11-29 08:52:17.758 227364 DEBUG nova.network.neutron [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:17 np0005539551 nova_compute[227360]: 2025-11-29 08:52:17.793 227364 DEBUG oslo_concurrency.lockutils [req-3eb65a91-c928-4a8a-bec8-82ab0b4346f5 req-0f68c93c-596d-45bd-8edf-a2d28562823b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:52:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:19 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:52:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:19.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:19.900 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:19.902 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:19.903 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:19 np0005539551 nova_compute[227360]: 2025-11-29 08:52:19.948 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:20 np0005539551 nova_compute[227360]: 2025-11-29 08:52:20.374 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:20 np0005539551 podman[306773]: 2025-11-29 08:52:20.613831958 +0000 UTC m=+0.060259192 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:20 np0005539551 podman[306772]: 2025-11-29 08:52:20.613860469 +0000 UTC m=+0.060068427 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:20 np0005539551 podman[306771]: 2025-11-29 08:52:20.710877226 +0000 UTC m=+0.156121108 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:52:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:21.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:21.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:21Z|00127|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.8
Nov 29 03:52:21 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:21Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:36:a5:df 10.100.0.8
Nov 29 03:52:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:23.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:23.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:24 np0005539551 nova_compute[227360]: 2025-11-29 08:52:24.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:25Z|00129|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.8
Nov 29 03:52:25 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:25Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:36:a5:df 10.100.0.8
Nov 29 03:52:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:25 np0005539551 nova_compute[227360]: 2025-11-29 08:52:25.376 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:25.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:26Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:a5:df 10.100.0.8
Nov 29 03:52:26 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:26Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:a5:df 10.100.0.8
Nov 29 03:52:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:27.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:29.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:29 np0005539551 nova_compute[227360]: 2025-11-29 08:52:29.952 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:30 np0005539551 nova_compute[227360]: 2025-11-29 08:52:30.379 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:31.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:33.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:34 np0005539551 nova_compute[227360]: 2025-11-29 08:52:34.955 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:35.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:35 np0005539551 nova_compute[227360]: 2025-11-29 08:52:35.381 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:35.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:37.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:39.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:39 np0005539551 nova_compute[227360]: 2025-11-29 08:52:39.958 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:40 np0005539551 nova_compute[227360]: 2025-11-29 08:52:40.382 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:43.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:44 np0005539551 nova_compute[227360]: 2025-11-29 08:52:44.961 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:45 np0005539551 nova_compute[227360]: 2025-11-29 08:52:45.384 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:47.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:47.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.943 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.944 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.944 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.944 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.945 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.946 227364 INFO nova.compute.manager [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Terminating instance#033[00m
Nov 29 03:52:48 np0005539551 nova_compute[227360]: 2025-11-29 08:52:48.948 227364 DEBUG nova.compute.manager [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:52:49 np0005539551 kernel: tap88369384-0e (unregistering): left promiscuous mode
Nov 29 03:52:49 np0005539551 NetworkManager[48922]: <info>  [1764406369.0166] device (tap88369384-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.028 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:49Z|00950|binding|INFO|Releasing lport 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea from this chassis (sb_readonly=0)
Nov 29 03:52:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:49Z|00951|binding|INFO|Setting lport 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea down in Southbound
Nov 29 03:52:49 np0005539551 ovn_controller[130266]: 2025-11-29T08:52:49Z|00952|binding|INFO|Removing iface tap88369384-0e ovn-installed in OVS
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.033 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:a5:df 10.100.0.8'], port_security=['fa:16:3e:36:a5:df 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1a52df6e-2dba-41dc-bbcf-68954705ed0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>], logical_port=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb3c845d760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.034 139482 INFO neutron.agent.ovn.metadata.agent [-] Port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.036 139482 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.038 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[d53f0946-f8ed-4f48-98fe-501a3dd96b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.038 139482 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000df.scope: Deactivated successfully.
Nov 29 03:52:49 np0005539551 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000df.scope: Consumed 16.055s CPU time.
Nov 29 03:52:49 np0005539551 systemd-machined[190756]: Machine qemu-97-instance-000000df terminated.
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.195 227364 INFO nova.virt.libvirt.driver [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Instance destroyed successfully.#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.196 227364 DEBUG nova.objects.instance [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 1a52df6e-2dba-41dc-bbcf-68954705ed0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.213 227364 DEBUG nova.virt.libvirt.vif [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1971016946',display_name='tempest-TestVolumeBootPattern-server-1971016946',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1971016946',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:52:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-45cj73bx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:52:07Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=1a52df6e-2dba-41dc-bbcf-68954705ed0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.213 227364 DEBUG nova.network.os_vif_util [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.214 227364 DEBUG nova.network.os_vif_util [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.215 227364 DEBUG os_vif [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.217 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.217 227364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88369384-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [NOTICE]   (306627) : haproxy version is 2.8.14-c23fe91
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [NOTICE]   (306627) : path to executable is /usr/sbin/haproxy
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [WARNING]  (306627) : Exiting Master process...
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [WARNING]  (306627) : Exiting Master process...
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.220 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.221 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [ALERT]    (306627) : Current worker (306630) exited with code 143 (Terminated)
Nov 29 03:52:49 np0005539551 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[306622]: [WARNING]  (306627) : All workers exited. Exiting... (0)
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.223 227364 INFO os_vif [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:a5:df,bridge_name='br-int',has_traffic_filtering=True,id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88369384-0e')#033[00m
Nov 29 03:52:49 np0005539551 systemd[1]: libpod-9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2.scope: Deactivated successfully.
Nov 29 03:52:49 np0005539551 podman[306916]: 2025-11-29 08:52:49.231704183 +0000 UTC m=+0.046791668 container died 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:52:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:52:49 np0005539551 systemd[1]: var-lib-containers-storage-overlay-8867e85511b49786005ca1f68fcb1ea3f1ad0359e1c88b6bc02fe836c3b8a241-merged.mount: Deactivated successfully.
Nov 29 03:52:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539551 podman[306916]: 2025-11-29 08:52:49.288831869 +0000 UTC m=+0.103919314 container cleanup 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:52:49 np0005539551 systemd[1]: libpod-conmon-9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2.scope: Deactivated successfully.
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.321 227364 DEBUG nova.compute.manager [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.321 227364 DEBUG nova.compute.manager [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing instance network info cache due to event network-changed-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.321 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.322 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.322 227364 DEBUG nova.network.neutron [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Refreshing network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:49 np0005539551 podman[306971]: 2025-11-29 08:52:49.358001562 +0000 UTC m=+0.045074051 container remove 9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.363 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba85e73-0bc7-472f-9c87-ddb4c278d637]: (4, ('Sat Nov 29 08:52:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2)\n9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2\nSat Nov 29 08:52:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2)\n9eece95c45096a72158b86e88caa686f8514c307ee4f34297f79018101c1c6b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.365 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[f487dd6a-e3ac-4167-a3cc-0251a34f680b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.366 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:49 np0005539551 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.368 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.383 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3a72b1-5f46-403a-98ac-d8100ce4b9b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.407 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[21ae7833-56ad-4fbf-b05d-389cbc10edd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.409 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[2b503663-05aa-481f-a6ed-489a17a003c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.426 231643 DEBUG oslo.privsep.daemon [-] privsep: reply[74313f0f-4d12-4c3e-9e80-4c3c405b9cad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937327, 'reachable_time': 40190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306986, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.428 139603 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:52:49 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:52:49.428 139603 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b5d22a-ab47-4993-9eca-0222182f5007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:49 np0005539551 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.453 227364 INFO nova.virt.libvirt.driver [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Deleting instance files /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a_del#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.454 227364 INFO nova.virt.libvirt.driver [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Deletion of /var/lib/nova/instances/1a52df6e-2dba-41dc-bbcf-68954705ed0a_del complete#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.505 227364 INFO nova.compute.manager [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.506 227364 DEBUG oslo.service.loopingcall [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.506 227364 DEBUG nova.compute.manager [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.506 227364 DEBUG nova.network.neutron [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:52:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539551 nova_compute[227360]: 2025-11-29 08:52:49.962 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.200 227364 DEBUG nova.network.neutron [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.230 227364 INFO nova.compute.manager [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.416 227364 INFO nova.compute.manager [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Took 0.19 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.475 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.476 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.563 227364 DEBUG nova.network.neutron [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updated VIF entry in instance network info cache for port 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.564 227364 DEBUG nova.network.neutron [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [{"id": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "address": "fa:16:3e:36:a5:df", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88369384-0e", "ovs_interfaceid": "88369384-0ebd-4a3f-9cd4-3a6ffd4271ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.584 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1a52df6e-2dba-41dc-bbcf-68954705ed0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG nova.compute.manager [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-unplugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG oslo_concurrency.lockutils [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG nova.compute.manager [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] No waiting events found dispatching network-vif-unplugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.585 227364 DEBUG nova.compute.manager [req-b2498ea8-c123-478d-a053-c308e5a759f9 req-63c2bf9e-fea9-4420-aaa1-a283aceb6f45 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-unplugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:52:50 np0005539551 nova_compute[227360]: 2025-11-29 08:52:50.643 227364 DEBUG oslo_concurrency.processutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.013 227364 DEBUG nova.compute.manager [req-7ef57279-4b28-4c29-94c0-7e01821ee56f req-f5414c23-ccc1-4419-b38a-6d896f38d591 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-deleted-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.015 227364 INFO nova.compute.manager [req-7ef57279-4b28-4c29-94c0-7e01821ee56f req-f5414c23-ccc1-4419-b38a-6d896f38d591 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Neutron deleted interface 88369384-0ebd-4a3f-9cd4-3a6ffd4271ea; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.016 227364 DEBUG nova.network.neutron [req-7ef57279-4b28-4c29-94c0-7e01821ee56f req-f5414c23-ccc1-4419-b38a-6d896f38d591 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.036 227364 DEBUG nova.compute.manager [req-7ef57279-4b28-4c29-94c0-7e01821ee56f req-f5414c23-ccc1-4419-b38a-6d896f38d591 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Detach interface failed, port_id=88369384-0ebd-4a3f-9cd4-3a6ffd4271ea, reason: Instance 1a52df6e-2dba-41dc-bbcf-68954705ed0a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:52:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:51.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1822531910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.343 227364 DEBUG oslo_concurrency.processutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.351 227364 DEBUG nova.compute.provider_tree [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.369 227364 DEBUG nova.scheduler.client.report [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.391 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.468 227364 DEBUG nova.compute.manager [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.469 227364 DEBUG oslo_concurrency.lockutils [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.469 227364 DEBUG oslo_concurrency.lockutils [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.469 227364 DEBUG oslo_concurrency.lockutils [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.470 227364 DEBUG nova.compute.manager [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] No waiting events found dispatching network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.470 227364 WARNING nova.compute.manager [req-9c23ce3d-89f0-4469-91e7-5a00e82b9561 req-b88a9467-3552-4951-8b64-f7aed8d4856c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Received unexpected event network-vif-plugged-88369384-0ebd-4a3f-9cd4-3a6ffd4271ea for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.489 227364 INFO nova.scheduler.client.report [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 1a52df6e-2dba-41dc-bbcf-68954705ed0a#033[00m
Nov 29 03:52:51 np0005539551 nova_compute[227360]: 2025-11-29 08:52:51.549 227364 DEBUG oslo_concurrency.lockutils [None req-1a6c40e1-2b67-4572-bd4e-8a6c51d1dbbf b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "1a52df6e-2dba-41dc-bbcf-68954705ed0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:51 np0005539551 podman[307011]: 2025-11-29 08:52:51.639942868 +0000 UTC m=+0.080154861 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:51 np0005539551 podman[307012]: 2025-11-29 08:52:51.662211841 +0000 UTC m=+0.102681431 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:51 np0005539551 podman[307010]: 2025-11-29 08:52:51.675087969 +0000 UTC m=+0.123895865 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:52:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:51.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 03:52:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539551 nova_compute[227360]: 2025-11-29 08:52:54.221 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 03:52:54 np0005539551 nova_compute[227360]: 2025-11-29 08:52:54.966 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:55.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.443 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.443 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.443 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.444 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.444 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/292598624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:56 np0005539551 nova_compute[227360]: 2025-11-29 08:52:56.929 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.113 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.114 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4211MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.114 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.115 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.197 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.198 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.211 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2864147508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.657 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.664 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.683 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.717 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:52:57 np0005539551 nova_compute[227360]: 2025-11-29 08:52:57.718 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.224 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:59.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:52:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.717 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.718 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.718 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.762 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:52:59 np0005539551 nova_compute[227360]: 2025-11-29 08:52:59.967 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:00 np0005539551 nova_compute[227360]: 2025-11-29 08:53:00.044 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:00 np0005539551 nova_compute[227360]: 2025-11-29 08:53:00.274 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:00 np0005539551 nova_compute[227360]: 2025-11-29 08:53:00.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:00 np0005539551 nova_compute[227360]: 2025-11-29 08:53:00.812 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:00.813 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:00 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:00.813 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:53:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:01.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:01 np0005539551 nova_compute[227360]: 2025-11-29 08:53:01.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 03:53:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:53:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619880111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:53:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:53:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619880111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:53:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:03 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:03.816 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:04 np0005539551 nova_compute[227360]: 2025-11-29 08:53:04.195 227364 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406369.1920848, 1a52df6e-2dba-41dc-bbcf-68954705ed0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:53:04 np0005539551 nova_compute[227360]: 2025-11-29 08:53:04.195 227364 INFO nova.compute.manager [-] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:53:04 np0005539551 nova_compute[227360]: 2025-11-29 08:53:04.214 227364 DEBUG nova.compute.manager [None req-ccbd5be9-5e10-4fe0-899c-6fda3d2e0669 - - - - - -] [instance: 1a52df6e-2dba-41dc-bbcf-68954705ed0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:53:04 np0005539551 nova_compute[227360]: 2025-11-29 08:53:04.227 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:04 np0005539551 nova_compute[227360]: 2025-11-29 08:53:04.970 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:05 np0005539551 nova_compute[227360]: 2025-11-29 08:53:05.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:06 np0005539551 nova_compute[227360]: 2025-11-29 08:53:06.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539551 nova_compute[227360]: 2025-11-29 08:53:09.231 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539551 nova_compute[227360]: 2025-11-29 08:53:09.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539551 nova_compute[227360]: 2025-11-29 08:53:09.972 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:11.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539551 nova_compute[227360]: 2025-11-29 08:53:14.234 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:14 np0005539551 nova_compute[227360]: 2025-11-29 08:53:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:14 np0005539551 nova_compute[227360]: 2025-11-29 08:53:14.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:53:14 np0005539551 nova_compute[227360]: 2025-11-29 08:53:14.973 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:15.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:16 np0005539551 nova_compute[227360]: 2025-11-29 08:53:16.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539551 nova_compute[227360]: 2025-11-29 08:53:19.238 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:19.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:19.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:53:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:19 np0005539551 nova_compute[227360]: 2025-11-29 08:53:19.977 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:21.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:22 np0005539551 podman[307122]: 2025-11-29 08:53:22.63925104 +0000 UTC m=+0.070481499 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:53:22 np0005539551 podman[307121]: 2025-11-29 08:53:22.649648422 +0000 UTC m=+0.080956593 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 03:53:22 np0005539551 podman[307120]: 2025-11-29 08:53:22.676128609 +0000 UTC m=+0.116899966 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:53:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:24 np0005539551 nova_compute[227360]: 2025-11-29 08:53:24.297 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:24 np0005539551 nova_compute[227360]: 2025-11-29 08:53:24.979 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:25.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:27.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:27.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:29 np0005539551 nova_compute[227360]: 2025-11-29 08:53:29.299 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:29.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:29.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:29 np0005539551 nova_compute[227360]: 2025-11-29 08:53:29.980 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:31.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:31.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:33.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:34 np0005539551 nova_compute[227360]: 2025-11-29 08:53:34.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539551 nova_compute[227360]: 2025-11-29 08:53:34.982 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:35.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:37.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:37.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:39 np0005539551 nova_compute[227360]: 2025-11-29 08:53:39.306 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:39.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:39 np0005539551 nova_compute[227360]: 2025-11-29 08:53:39.984 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:41.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:43.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:43.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:44 np0005539551 nova_compute[227360]: 2025-11-29 08:53:44.310 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:44 np0005539551 nova_compute[227360]: 2025-11-29 08:53:44.987 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:45.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:45.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:47.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:47 np0005539551 ovn_controller[130266]: 2025-11-29T08:53:47Z|00953|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Nov 29 03:53:49 np0005539551 nova_compute[227360]: 2025-11-29 08:53:49.312 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:49.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:49.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:49 np0005539551 nova_compute[227360]: 2025-11-29 08:53:49.988 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:52 np0005539551 nova_compute[227360]: 2025-11-29 08:53:52.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:53 np0005539551 podman[307367]: 2025-11-29 08:53:53.617896464 +0000 UTC m=+0.056789859 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 03:53:53 np0005539551 podman[307366]: 2025-11-29 08:53:53.651946706 +0000 UTC m=+0.095423725 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:53:53 np0005539551 podman[307365]: 2025-11-29 08:53:53.658214196 +0000 UTC m=+0.095421685 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:53:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:53.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539551 nova_compute[227360]: 2025-11-29 08:53:54.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:54 np0005539551 nova_compute[227360]: 2025-11-29 08:53:54.990 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:57.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.459 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.460 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.460 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.461 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.461 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1251568039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:57 np0005539551 nova_compute[227360]: 2025-11-29 08:53:57.943 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.093 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.094 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4226MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.094 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.095 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.153 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.153 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.172 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2252891805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.588 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.594 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.610 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.612 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:53:58 np0005539551 nova_compute[227360]: 2025-11-29 08:53:58.612 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:59 np0005539551 nova_compute[227360]: 2025-11-29 08:53:59.358 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:59.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:53:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:59.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:59 np0005539551 nova_compute[227360]: 2025-11-29 08:53:59.992 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:00 np0005539551 nova_compute[227360]: 2025-11-29 08:54:00.612 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:00 np0005539551 nova_compute[227360]: 2025-11-29 08:54:00.612 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:54:00 np0005539551 nova_compute[227360]: 2025-11-29 08:54:00.613 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:54:00 np0005539551 nova_compute[227360]: 2025-11-29 08:54:00.628 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:54:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:01.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:02 np0005539551 nova_compute[227360]: 2025-11-29 08:54:02.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:03 np0005539551 nova_compute[227360]: 2025-11-29 08:54:03.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:03.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:03.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:04 np0005539551 nova_compute[227360]: 2025-11-29 08:54:04.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:04 np0005539551 nova_compute[227360]: 2025-11-29 08:54:04.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:05 np0005539551 nova_compute[227360]: 2025-11-29 08:54:05.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:05.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:05.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:07 np0005539551 nova_compute[227360]: 2025-11-29 08:54:07.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:07.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:09 np0005539551 nova_compute[227360]: 2025-11-29 08:54:09.402 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:09.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:09 np0005539551 nova_compute[227360]: 2025-11-29 08:54:09.995 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:10 np0005539551 nova_compute[227360]: 2025-11-29 08:54:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:11.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:13 np0005539551 systemd-logind[788]: New session 57 of user zuul.
Nov 29 03:54:13 np0005539551 systemd[1]: Started Session 57 of User zuul.
Nov 29 03:54:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:13.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539551 nova_compute[227360]: 2025-11-29 08:54:14.455 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:14 np0005539551 nova_compute[227360]: 2025-11-29 08:54:14.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:15 np0005539551 nova_compute[227360]: 2025-11-29 08:54:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:15 np0005539551 nova_compute[227360]: 2025-11-29 08:54:15.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:54:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:15.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:16 np0005539551 nova_compute[227360]: 2025-11-29 08:54:16.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:16 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2897677560' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:17.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:19 np0005539551 nova_compute[227360]: 2025-11-29 08:54:19.459 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:19.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:54:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:54:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:54:19.901 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:19 np0005539551 ovs-vsctl[307760]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 03:54:19 np0005539551 nova_compute[227360]: 2025-11-29 08:54:19.998 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:20 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 03:54:20 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 03:54:21 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 03:54:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:21 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: cache status {prefix=cache status} (starting...)
Nov 29 03:54:21 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:21 np0005539551 lvm[308078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 03:54:21 np0005539551 lvm[308078]: VG ceph_vg0 finished
Nov 29 03:54:21 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: client ls {prefix=client ls} (starting...)
Nov 29 03:54:21 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:21.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 03:54:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2825663970' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:22 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/806821748' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 03:54:22 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3723587710' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: ops {prefix=ops} (starting...)
Nov 29 03:54:23 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/148312495' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/303893898' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 03:54:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:23 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2425727295' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:24 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: session ls {prefix=session ls} (starting...)
Nov 29 03:54:24 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 03:54:24 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: status {prefix=status} (starting...)
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1986936218' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:24 np0005539551 nova_compute[227360]: 2025-11-29 08:54:24.491 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:24 np0005539551 podman[308512]: 2025-11-29 08:54:24.62199192 +0000 UTC m=+0.080111040 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:54:24 np0005539551 podman[308513]: 2025-11-29 08:54:24.639135744 +0000 UTC m=+0.092241148 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:54:24 np0005539551 podman[308503]: 2025-11-29 08:54:24.64116554 +0000 UTC m=+0.100293507 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1980467150' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 03:54:24 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467763195' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 03:54:25 np0005539551 nova_compute[227360]: 2025-11-29 08:54:25.000 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1968363107' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2097013357' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 03:54:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 03:54:25 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/61580430' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 03:54:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:25.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2344379655' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3282320048' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.196105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466196166, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 2459, "num_deletes": 254, "total_data_size": 5902651, "memory_usage": 5983744, "flush_reason": "Manual Compaction"}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466213642, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 3872042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73123, "largest_seqno": 75577, "table_properties": {"data_size": 3861851, "index_size": 6492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21680, "raw_average_key_size": 20, "raw_value_size": 3841300, "raw_average_value_size": 3707, "num_data_blocks": 282, "num_entries": 1036, "num_filter_entries": 1036, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406252, "oldest_key_time": 1764406252, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 17866 microseconds, and 6462 cpu microseconds.
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.213969) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 3872042 bytes OK
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.214060) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.215520) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.215535) EVENT_LOG_v1 {"time_micros": 1764406466215530, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.215551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 5891700, prev total WAL file size 5891700, number of live WAL files 2.
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.217145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(3781KB)], [150(11MB)]
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466217203, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 16359142, "oldest_snapshot_seqno": -1}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10702 keys, 14433766 bytes, temperature: kUnknown
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466284190, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14433766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14363493, "index_size": 42384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26821, "raw_key_size": 281872, "raw_average_key_size": 26, "raw_value_size": 14174592, "raw_average_value_size": 1324, "num_data_blocks": 1616, "num_entries": 10702, "num_filter_entries": 10702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.284473) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14433766 bytes
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.285695) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.0 rd, 215.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11227, records dropped: 525 output_compression: NoCompression
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.285711) EVENT_LOG_v1 {"time_micros": 1764406466285703, "job": 96, "event": "compaction_finished", "compaction_time_micros": 67057, "compaction_time_cpu_micros": 31360, "output_level": 6, "num_output_files": 1, "total_output_size": 14433766, "num_input_records": 11227, "num_output_records": 10702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466286389, "job": 96, "event": "table_file_deletion", "file_number": 152}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466288215, "job": 96, "event": "table_file_deletion", "file_number": 150}
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.217087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.288252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.288257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.288259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.288261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:54:26.288263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2540878717' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 03:54:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2009040820' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:27 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3267097004' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f60f6400 session 0x5616f52fe000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f60fa800 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f63dac00 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f4d674a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e54c00 session 0x5616f560e780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4522721 data_alloc: 251658240 data_used: 42831872
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f58bf2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f5c5cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f3518400 session 0x5616f52fe1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.202406883s of 10.659513474s, submitted: 153
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f32a0d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f53f1860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e54c00 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 heartbeat osd_stat(store_statfs(0x1a46f4000/0x0/0x1bfc00000, data 0x69403b5/0x6b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380936192 unmapped: 61808640 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5414000 session 0x5616f32a0d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5412000 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380952576 unmapped: 61792256 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4533314 data_alloc: 251658240 data_used: 44998656
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f40c9c00 session 0x5616f60efc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f96e8000 session 0x5616f509cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x561704583c00 session 0x5616f60ee960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 381419520 unmapped: 61325312 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f5dfd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382656512 unmapped: 60088320 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f5dfd4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 heartbeat osd_stat(store_statfs(0x1a3fa0000/0x0/0x1bfc00000, data 0x70943b5/0x725e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382656512 unmapped: 60088320 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382656512 unmapped: 60088320 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f96e8000 session 0x5616f58bef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382836736 unmapped: 59908096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674846 data_alloc: 268435456 data_used: 54767616
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f5dfc3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382836736 unmapped: 59908096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f53f0000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f5e28780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f58754a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x561704583c00 session 0x5616f582d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f5605860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f5917c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382894080 unmapped: 59850752 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 heartbeat osd_stat(store_statfs(0x1a3fa0000/0x0/0x1bfc00000, data 0x70943b5/0x725e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f5c5d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f509cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f40c9c00 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5412000 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f40c9c00 session 0x5616f560f680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f59165a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f560fc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.884857178s of 10.149435997s, submitted: 52
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5e53c00 session 0x5616f5de83c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f5001400 session 0x5616f509bc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382894080 unmapped: 59850752 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 383000576 unmapped: 59744256 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 heartbeat osd_stat(store_statfs(0x1a3749000/0x0/0x1bfc00000, data 0x78ea3d8/0x7ab5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386711552 unmapped: 56033280 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796763 data_alloc: 268435456 data_used: 61206528
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2ec1400 session 0x5616f4cdd0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 ms_handle_reset con 0x5616f2efe400 session 0x5616f5917e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386711552 unmapped: 56033280 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f5412000 session 0x5616f3bcd4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f3256400 session 0x5616f58730e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5c5d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f63db000 session 0x5616f560ed20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616fd7ae800 session 0x5616f3324f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382566400 unmapped: 60178432 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f3256400 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f2efe400 session 0x5616f4cdd0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f2ec1400 session 0x5616f3bd63c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f2efe400 session 0x5616f509bc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378142720 unmapped: 64602112 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f60f6400 session 0x5616f5de8d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f60fa800 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378863616 unmapped: 63881216 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f63db000 session 0x5616f3355e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 64684032 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 heartbeat osd_stat(store_statfs(0x1a5144000/0x0/0x1bfc00000, data 0x5ef011c/0x60ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4487102 data_alloc: 251658240 data_used: 44003328
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f2ec0000 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f90d9400 session 0x5616f58281e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 64684032 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 ms_handle_reset con 0x5616f2ec1400 session 0x5616f402b860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 64684032 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 64684032 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 heartbeat osd_stat(store_statfs(0x1a5144000/0x0/0x1bfc00000, data 0x5ef011c/0x60ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.724976540s of 11.283328056s, submitted: 182
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378200064 unmapped: 64544768 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 63496192 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f2efe400 session 0x5616f50d9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4373941 data_alloc: 251658240 data_used: 36306944
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f5436400 session 0x5616f53f01e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f589b800 session 0x5616f560fa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375644160 unmapped: 67100672 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f2ec1400 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f2efe400 session 0x5616f5917860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f90d9400 session 0x5616f5e154a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f5e54c00 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f2ec1400 session 0x5616f3bcc5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380960768 unmapped: 61784064 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f75c3800 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 ms_handle_reset con 0x5616f2efe400 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 ms_handle_reset con 0x5616f589b800 session 0x5616f3324b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 ms_handle_reset con 0x5616f90d9400 session 0x5616f5dfdc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 ms_handle_reset con 0x5616f2ec1400 session 0x5616f60ee960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 ms_handle_reset con 0x5616f2efe400 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 ms_handle_reset con 0x5616f2ec0000 session 0x5616f2f7a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375668736 unmapped: 67076096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a6368000/0x0/0x1bfc00000, data 0x4987a46/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375668736 unmapped: 67076096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375668736 unmapped: 67076096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4233710 data_alloc: 251658240 data_used: 30441472
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 heartbeat osd_stat(store_statfs(0x1a6368000/0x0/0x1bfc00000, data 0x4987a46/0x4b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375668736 unmapped: 67076096 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373178368 unmapped: 69566464 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 374988800 unmapped: 67756032 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.121341705s of 10.251806259s, submitted: 117
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f5917c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373358592 unmapped: 69386240 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c3800 session 0x5616f582c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 69378048 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4286912 data_alloc: 251658240 data_used: 30482432
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6050000/0x0/0x1bfc00000, data 0x4fe264f/0x51ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 69378048 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6050000/0x0/0x1bfc00000, data 0x4fe264f/0x51ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,4])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373374976 unmapped: 69369856 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 69361664 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 69361664 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5875a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6049000/0x0/0x1bfc00000, data 0x4fea64f/0x51b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f5dfc5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 69361664 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4335448 data_alloc: 251658240 data_used: 36179968
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a655f000/0x0/0x1bfc00000, data 0x4ad464f/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4261390 data_alloc: 251658240 data_used: 30846976
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 70729728 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a655f000/0x0/0x1bfc00000, data 0x4ad464f/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.420407295s of 12.947847366s, submitted: 35
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f90d9400 session 0x5616f362f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4269454 data_alloc: 251658240 data_used: 31416320
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a655f000/0x0/0x1bfc00000, data 0x4ad464f/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 70721536 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a655f000/0x0/0x1bfc00000, data 0x4ad464f/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,5])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4271510 data_alloc: 251658240 data_used: 31412224
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f5872f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372039680 unmapped: 70705152 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fa800 session 0x5616f5df30e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5c5c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f5e18f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372039680 unmapped: 70705152 heap: 442744832 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.167735100s of 10.317631721s, submitted: 19
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f509d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec1400 session 0x5616f58752c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f90d9400 session 0x5616f58725a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f560e5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f5df3c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f63db000 session 0x5616f5de9c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 364175360 unmapped: 82247680 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6d54000/0x0/0x1bfc00000, data 0x427565f/0x4441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 364175360 unmapped: 82247680 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f512d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 82313216 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149738 data_alloc: 234881024 data_used: 23425024
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f509c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 364109824 unmapped: 82313216 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f509d4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3256400 session 0x5616f59165a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f90d9400 session 0x5616f5e15680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 80281600 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f582d680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6dba000/0x0/0x1bfc00000, data 0x427666f/0x4443000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f5c02780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 80273408 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616fd7ae800 session 0x5616f60ee3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6db9000/0x0/0x1bfc00000, data 0x427667f/0x4444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 80617472 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f53f01e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 80617472 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4196809 data_alloc: 251658240 data_used: 30666752
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f50d9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 80494592 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60f6400 session 0x5616f58281e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f3b51c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x561702363800 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f4d66d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 80191488 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6487000/0x0/0x1bfc00000, data 0x4ba967f/0x4d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4271371 data_alloc: 251658240 data_used: 32342016
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6487000/0x0/0x1bfc00000, data 0x4ba967f/0x4d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6487000/0x0/0x1bfc00000, data 0x4ba967f/0x4d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 80183296 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a6487000/0x0/0x1bfc00000, data 0x4ba967f/0x4d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.798950195s of 17.770008087s, submitted: 169
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 77594624 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4295795 data_alloc: 251658240 data_used: 32337920
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 369811456 unmapped: 76611584 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 369876992 unmapped: 76546048 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370720768 unmapped: 75702272 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370794496 unmapped: 75628544 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f53f1e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a5e66000/0x0/0x1bfc00000, data 0x51c267f/0x5390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370958336 unmapped: 75464704 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4333697 data_alloc: 251658240 data_used: 33284096
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c2000 session 0x5616f5e194a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f5c5c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5415400 session 0x5616f5604f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370761728 unmapped: 75661312 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5de8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f5df3680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c2000 session 0x5616f5dfcd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f5c5c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f5e194a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370769920 unmapped: 75653120 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370769920 unmapped: 75653120 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370769920 unmapped: 75653120 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a5e57000/0x0/0x1bfc00000, data 0x51d76b2/0x53a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.132991791s of 10.091823578s, submitted: 87
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370769920 unmapped: 75653120 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4391199 data_alloc: 251658240 data_used: 41725952
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370786304 unmapped: 75636736 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a5e51000/0x0/0x1bfc00000, data 0x51dc6d5/0x53ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 370786304 unmapped: 75636736 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c2000 session 0x5616f509b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f566a800 session 0x5616f509a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f8392400 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f3cf1680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f566a800 session 0x5616f2f7a960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f5df2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c2000 session 0x5616f2ec4d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f8392400 session 0x5616f32885a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec0000 session 0x5616f5df2d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372539392 unmapped: 73883648 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372547584 unmapped: 73875456 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f53f0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f50d8b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372547584 unmapped: 73875456 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4449880 data_alloc: 251658240 data_used: 41693184
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372547584 unmapped: 73875456 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3516000 session 0x5616f3cf0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a57af000/0x0/0x1bfc00000, data 0x587e6d5/0x5a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 372547584 unmapped: 73875456 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a5349000/0x0/0x1bfc00000, data 0x5ce46d5/0x5eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 373768192 unmapped: 72654848 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 375971840 unmapped: 70451200 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.525812149s of 10.135337830s, submitted: 110
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3516000 session 0x5616f582d680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378880000 unmapped: 67543040 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4526500 data_alloc: 251658240 data_used: 42426368
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 67510272 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 67510272 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a4933000/0x0/0x1bfc00000, data 0x555c6b5/0x572b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 379764736 unmapped: 66658304 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 65585152 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 65585152 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4446269 data_alloc: 251658240 data_used: 41111552
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5874b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380837888 unmapped: 65585152 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f5875680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5414c00 session 0x5616f5e283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5df21e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 65552384 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 65552384 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a57cf000/0x0/0x1bfc00000, data 0x46be692/0x488c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f90d9400 session 0x5616f2f7a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f5e290e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 65552384 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 65552384 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.938549042s of 10.264632225s, submitted: 110
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4233989 data_alloc: 251658240 data_used: 33923072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a57cf000/0x0/0x1bfc00000, data 0x46c1692/0x488f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f4d663c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f52ff4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f3bb7c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5414c00 session 0x5616f3bccd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f50d9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f5dfd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 380870656 unmapped: 65552384 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5414c00 session 0x5616f5e14960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5df21e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f5874b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f3cf0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f32885a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 381239296 unmapped: 65183744 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 381247488 unmapped: 65175552 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 381247488 unmapped: 65175552 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f509a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 383909888 unmapped: 62513152 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4386471 data_alloc: 251658240 data_used: 34238464
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5414c00 session 0x5616f5604f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f5df2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384335872 unmapped: 62087168 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a45af000/0x0/0x1bfc00000, data 0x58df704/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a45af000/0x0/0x1bfc00000, data 0x58df704/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f5df2b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384335872 unmapped: 62087168 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f58725a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a45ac000/0x0/0x1bfc00000, data 0x58e1714/0x5ab2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384516096 unmapped: 61906944 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384516096 unmapped: 61906944 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f509cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5e29a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 385204224 unmapped: 61218816 heap: 446423040 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4443114 data_alloc: 251658240 data_used: 40751104
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.033100128s of 10.272195816s, submitted: 119
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f58bf0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 385449984 unmapped: 64651264 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f5df2000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f5c02960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5c5c1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f7367000 session 0x5616f509b0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5875a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3eff000/0x0/0x1bfc00000, data 0x5f8f704/0x615f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386506752 unmapped: 63594496 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386506752 unmapped: 63594496 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3efe000/0x0/0x1bfc00000, data 0x5f8f6f1/0x615f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3edc000/0x0/0x1bfc00000, data 0x5fb16f1/0x6181000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386580480 unmapped: 63520768 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f52fef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386580480 unmapped: 63520768 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4496262 data_alloc: 251658240 data_used: 40747008
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3edd000/0x0/0x1bfc00000, data 0x5fb16f1/0x6181000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388210688 unmapped: 61890560 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3edd000/0x0/0x1bfc00000, data 0x5fb16f1/0x6181000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390209536 unmapped: 59891712 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390209536 unmapped: 59891712 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390209536 unmapped: 59891712 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3ed4000/0x0/0x1bfc00000, data 0x5fb16f1/0x6181000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390209536 unmapped: 59891712 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4554358 data_alloc: 268435456 data_used: 48439296
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.969882011s of 10.317898750s, submitted: 56
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398516224 unmapped: 51585024 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392667136 unmapped: 57434112 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f63d9800 session 0x5616f2ec4f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2c8c000/0x0/0x1bfc00000, data 0x6df06f1/0x6fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 393322496 unmapped: 56778752 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394395648 unmapped: 55705600 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 55689216 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4675658 data_alloc: 268435456 data_used: 49528832
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 55689216 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394821632 unmapped: 55279616 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396705792 unmapped: 53395456 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a26c1000/0x0/0x1bfc00000, data 0x73b56f1/0x7585000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396705792 unmapped: 53395456 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396738560 unmapped: 53362688 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f582cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4773186 data_alloc: 268435456 data_used: 50352128
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f5dfd4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f581a400 session 0x5616f4d66b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.615989208s of 10.113099098s, submitted: 252
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396763136 unmapped: 53338112 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 53329920 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2175000/0x0/0x1bfc00000, data 0x7907753/0x7ad8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 53329920 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f58721e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3256800 session 0x5616f5e28f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 53329920 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 57729024 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4603650 data_alloc: 251658240 data_used: 41472000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 393781248 unmapped: 56320000 heap: 450101248 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5001400 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f5dfd4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f582cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394010624 unmapped: 63438848 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394010624 unmapped: 63438848 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a24c9000/0x0/0x1bfc00000, data 0x75b4753/0x7785000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394018816 unmapped: 63430656 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f6aa0400 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394018816 unmapped: 63430656 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4697154 data_alloc: 251658240 data_used: 40947712
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394018816 unmapped: 63430656 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f2f7a960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.649077415s of 11.258911133s, submitted: 66
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390127616 unmapped: 67321856 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a24c9000/0x0/0x1bfc00000, data 0x75b4753/0x7785000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390152192 unmapped: 67297280 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f58285a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390553600 unmapped: 66895872 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3247000/0x0/0x1bfc00000, data 0x6837743/0x6a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390553600 unmapped: 66895872 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4626819 data_alloc: 268435456 data_used: 46141440
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390553600 unmapped: 66895872 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3266c00 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390553600 unmapped: 66895872 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3c9d000/0x0/0x1bfc00000, data 0x5de1743/0x5fb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4477121 data_alloc: 251658240 data_used: 36618240
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a3c9d000/0x0/0x1bfc00000, data 0x5de1743/0x5fb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.254314423s of 10.469334602s, submitted: 35
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5de9c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f509bc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5872f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 73195520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5829e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f3cf0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 73179136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f5872000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5e56800 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f58bf0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a38d1000/0x0/0x1bfc00000, data 0x61ab763/0x637d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2,4])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 73179136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f32885a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5df2b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4515418 data_alloc: 251658240 data_used: 36679680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60fb000 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5c02960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a38d1000/0x0/0x1bfc00000, data 0x61ab763/0x637d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5df2000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f560fe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384663552 unmapped: 72785920 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384663552 unmapped: 72785920 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a340a000/0x0/0x1bfc00000, data 0x6671763/0x6843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 384679936 unmapped: 72769536 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3257800 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f32885a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f58bf0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a33fe000/0x0/0x1bfc00000, data 0x667c763/0x684e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 389701632 unmapped: 67747840 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5e55400 session 0x5616f5e18d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5829e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5872f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2fe3000/0x0/0x1bfc00000, data 0x6a99763/0x6c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,10])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 389267456 unmapped: 68182016 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f509bc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5e55400 session 0x5616f5de9c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4641172 data_alloc: 251658240 data_used: 36790272
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c0800 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5e55400 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f5412000 session 0x5616f5dfd680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9000 session 0x5616f5dfda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386129920 unmapped: 71319552 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2efe400 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f75c0800 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386326528 unmapped: 71122944 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 386310144 unmapped: 71139328 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x6e30773/0x7003000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 387063808 unmapped: 70385664 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 387391488 unmapped: 70057984 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709962 data_alloc: 251658240 data_used: 44818432
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388734976 unmapped: 68714496 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x6e30773/0x7003000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388734976 unmapped: 68714496 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388734976 unmapped: 68714496 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616fb009400 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388734976 unmapped: 68714496 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.026947975s of 16.827764511s, submitted: 88
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5de8000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 387375104 unmapped: 70074368 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4746442 data_alloc: 268435456 data_used: 49885184
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a2c4b000/0x0/0x1bfc00000, data 0x6e30773/0x7003000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388423680 unmapped: 69025792 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388423680 unmapped: 69025792 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388423680 unmapped: 69025792 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388431872 unmapped: 69017600 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 67485696 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4827750 data_alloc: 268435456 data_used: 51462144
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394567680 unmapped: 62881792 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a1bff000/0x0/0x1bfc00000, data 0x7e7c773/0x804f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,8])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a1bff000/0x0/0x1bfc00000, data 0x7e7c773/0x804f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398516224 unmapped: 58933248 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58843136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a12ed000/0x0/0x1bfc00000, data 0x8786773/0x8959000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58728448 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911468506s of 10.522818565s, submitted: 207
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58728448 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4978072 data_alloc: 268435456 data_used: 53125120
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60ff000 session 0x5616f5874b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9800 session 0x5616f50dd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398049280 unmapped: 59400192 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398082048 unmapped: 59367424 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398082048 unmapped: 59367424 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a0da3000/0x0/0x1bfc00000, data 0x8cd8773/0x8eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398499840 unmapped: 58949632 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400154624 unmapped: 57294848 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4901726 data_alloc: 268435456 data_used: 48635904
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x561702363000 session 0x5616f5e29a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a186b000/0x0/0x1bfc00000, data 0x820a773/0x83dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f3256800 session 0x5616f5e18780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 57286656 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f362f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60ff000 session 0x5616f5873a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9800 session 0x5616f50d8f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x561702363000 session 0x5616f509a1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f2f963c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f589b800 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f60ff000 session 0x5616f56054a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400236544 unmapped: 57212928 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a107f000/0x0/0x1bfc00000, data 0x89f4773/0x8bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400236544 unmapped: 57212928 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9800 session 0x5616f4cdde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f6aa0400 session 0x5616f58732c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400302080 unmapped: 57147392 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a107f000/0x0/0x1bfc00000, data 0x89f4773/0x8bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400310272 unmapped: 57139200 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4974662 data_alloc: 268435456 data_used: 48533504
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400310272 unmapped: 57139200 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f40c9c00 session 0x5616f509dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.967396736s of 12.459898949s, submitted: 158
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400318464 unmapped: 57131008 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400326656 unmapped: 57122816 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400203776 unmapped: 57245696 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 ms_handle_reset con 0x5616f96e9800 session 0x5616f4cdd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a1084000/0x0/0x1bfc00000, data 0x89f7773/0x8bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400211968 unmapped: 57237504 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5014286 data_alloc: 268435456 data_used: 51806208
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 heartbeat osd_stat(store_statfs(0x1a1084000/0x0/0x1bfc00000, data 0x89f7773/0x8bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 57229312 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f5e50c00 session 0x5616f60ee1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a1084000/0x0/0x1bfc00000, data 0x89f7773/0x8bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x561702362400 session 0x5616f509a1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f589b800 session 0x5616f3b514a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5829a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f60ff000 session 0x5616f5917860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f5e50c00 session 0x5616f5c20780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 56147968 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x561702362400 session 0x5616f512d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 56147968 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397254656 unmapped: 60194816 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f96e9800 session 0x5616f5604960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397254656 unmapped: 60194816 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4916116 data_alloc: 251658240 data_used: 44888064
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397262848 unmapped: 60186624 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f40c9c00 session 0x5616f3bb74a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f5e50c00 session 0x5616f4d674a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f60ff000 session 0x5616f5917a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x561702362400 session 0x5616f5c02780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403652608 unmapped: 53796864 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a0dc0000/0x0/0x1bfc00000, data 0x8cb8508/0x8e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1,0,7])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.315165043s of 10.675896645s, submitted: 50
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f589bc00 session 0x5616f560dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f40c9c00 session 0x5616f509af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f5e50c00 session 0x5616f40b1860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397574144 unmapped: 59875328 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397574144 unmapped: 59875328 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f60ff000 session 0x5616f5605680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x561702362400 session 0x5616f3bb65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a0dbd000/0x0/0x1bfc00000, data 0x8cbb508/0x8e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397574144 unmapped: 59875328 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5006937 data_alloc: 251658240 data_used: 44040192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f3517c00 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397574144 unmapped: 59875328 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 heartbeat osd_stat(store_statfs(0x1a0dbc000/0x0/0x1bfc00000, data 0x8cbb518/0x8e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5875c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397582336 unmapped: 59867136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397582336 unmapped: 59867136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397582336 unmapped: 59867136 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 ms_handle_reset con 0x5616f5e50c00 session 0x5616f582d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397598720 unmapped: 59850752 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012561 data_alloc: 251658240 data_used: 44052480
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 ms_handle_reset con 0x561702362400 session 0x5616f509c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397598720 unmapped: 59850752 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 heartbeat osd_stat(store_statfs(0x1a0db9000/0x0/0x1bfc00000, data 0x8cbd22d/0x8e94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 ms_handle_reset con 0x5616f63d9800 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 heartbeat osd_stat(store_statfs(0x1a0db8000/0x0/0x1bfc00000, data 0x8cbd23c/0x8e95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397598720 unmapped: 59850752 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397287424 unmapped: 60162048 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.063557148s of 10.670211792s, submitted: 44
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397287424 unmapped: 60162048 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 heartbeat osd_stat(store_statfs(0x1a0db8000/0x0/0x1bfc00000, data 0x8cbd23c/0x8e95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397287424 unmapped: 60162048 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5068605 data_alloc: 268435456 data_used: 49643520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397287424 unmapped: 60162048 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397287424 unmapped: 60162048 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397295616 unmapped: 60153856 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f60ff000 session 0x5616f509c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 60145664 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0db3000/0x0/0x1bfc00000, data 0x8cbfe45/0x8e99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 60145664 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5071887 data_alloc: 268435456 data_used: 49643520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616fb008800 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5e290e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e50c00 session 0x5616f5e28f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f63d9800 session 0x5616f60ef0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 60145664 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x561702362400 session 0x5616f560da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5e152c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e50c00 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f63d9800 session 0x5616f582d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616fb008800 session 0x5616f3bb65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e59400 session 0x5616f560dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 59957248 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f2ec1000 session 0x5616f32a0d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f2efe400 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394313728 unmapped: 63135744 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f4d674a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.320205688s of 10.475521088s, submitted: 58
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398082048 unmapped: 59367424 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397549568 unmapped: 59899904 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5085104 data_alloc: 268435456 data_used: 47661056
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a7a000/0x0/0x1bfc00000, data 0x8ffae45/0x91d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397565952 unmapped: 59883520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397565952 unmapped: 59883520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a6a000/0x0/0x1bfc00000, data 0x9009e45/0x91e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397565952 unmapped: 59883520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616fb009800 session 0x5616f33c2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f3257800 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f75c0800 session 0x5616f5de8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f3257800 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397565952 unmapped: 59883520 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a6a000/0x0/0x1bfc00000, data 0x9009e45/0x91e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397606912 unmapped: 59842560 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616fb009800 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5096764 data_alloc: 268435456 data_used: 49057792
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e50c00 session 0x5616f5873e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400195584 unmapped: 57253888 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f3257800 session 0x5616f5e29a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400195584 unmapped: 57253888 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a6d000/0x0/0x1bfc00000, data 0x9009e26/0x91e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400195584 unmapped: 57253888 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a6d000/0x0/0x1bfc00000, data 0x9009e26/0x91e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f362e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400195584 unmapped: 57253888 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 heartbeat osd_stat(store_statfs(0x1a0a6d000/0x0/0x1bfc00000, data 0x9009e26/0x91e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400203776 unmapped: 57245696 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5122124 data_alloc: 268435456 data_used: 52559872
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.328190804s of 11.684500694s, submitted: 114
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f6aa1400 session 0x5616f582cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e57000 session 0x5616f2f7a960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400261120 unmapped: 57188352 heap: 457449472 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616fb009800 session 0x5616f3289e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f75c0800 session 0x5616f3cf1680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f5e57000 session 0x5616f5dfc000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400433152 unmapped: 61218816 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 ms_handle_reset con 0x5616f6aa1400 session 0x5616f32885a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f63d9800 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f3257800 session 0x5616f5e14780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f40c9c00 session 0x5616f50dde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f5e57000 session 0x5616f5e292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f63d9800 session 0x5616f5c02b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f6aa1400 session 0x5616f5c03e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400441344 unmapped: 61210624 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f3257800 session 0x5616f5df2d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400441344 unmapped: 61210624 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400441344 unmapped: 61210624 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004024 data_alloc: 251658240 data_used: 40796160
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 heartbeat osd_stat(store_statfs(0x1a0fe1000/0x0/0x1bfc00000, data 0x8a94b39/0x8c6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400449536 unmapped: 61202432 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f5412000 session 0x5616f58285a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f5e55400 session 0x5616f58283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f63d9800 session 0x5616f5dfc5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616fb008800 session 0x5616f3bd63c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399745024 unmapped: 61906944 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f75c0800 session 0x5616f560eb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f3257800 session 0x5616f60ee780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f5412000 session 0x5616f58bf0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399491072 unmapped: 62160896 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399499264 unmapped: 62152704 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 ms_handle_reset con 0x5616f320a400 session 0x5616f5c03680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 ms_handle_reset con 0x5616f75c3400 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 ms_handle_reset con 0x5616f5e55400 session 0x5616f5917a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 ms_handle_reset con 0x5616f63d9800 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a161f000/0x0/0x1bfc00000, data 0x84558a0/0x862d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399515648 unmapped: 62136320 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4853481 data_alloc: 251658240 data_used: 38703104
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.373140335s of 10.012124062s, submitted: 195
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 ms_handle_reset con 0x5616f320a400 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852756 data_alloc: 251658240 data_used: 38703104
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 heartbeat osd_stat(store_statfs(0x1a234f000/0x0/0x1bfc00000, data 0x7728890/0x78ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f3257800 session 0x5616f3c63860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395550720 unmapped: 66101248 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.9 total, 600.0 interval#012Cumulative writes: 51K writes, 199K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 51K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 42.43 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4039 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f75c0800 session 0x5616f560f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863450 data_alloc: 251658240 data_used: 39215104
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f3257800 session 0x5616f3355e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5e55400 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395558912 unmapped: 66093056 heap: 461651968 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a234b000/0x0/0x1bfc00000, data 0x772a4fb/0x7903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f63d9800 session 0x5616f5dfcd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.008210182s of 11.044918060s, submitted: 22
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f63dbc00 session 0x5616f582cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f5de8f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f3257800 session 0x5616f5e14960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5e55400 session 0x5616f58732c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f63d9800 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395788288 unmapped: 69541888 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1954000/0x0/0x1bfc00000, data 0x81214fb/0x82fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4942853 data_alloc: 251658240 data_used: 39211008
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1954000/0x0/0x1bfc00000, data 0x81214fb/0x82fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 68558848 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396779520 unmapped: 68550656 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f5e145a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396959744 unmapped: 68370432 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4946335 data_alloc: 251658240 data_used: 39223296
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f5e190e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f3257800 session 0x5616f5e194a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5e55400 session 0x5616f5df2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f63d9800 session 0x5616f5828000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396967936 unmapped: 68362240 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f509dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5412000 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5e55400 session 0x5616f3bcc1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f566ac00 session 0x5616f33c2960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396967936 unmapped: 68362240 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1337000/0x0/0x1bfc00000, data 0x873d50b/0x8917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397557760 unmapped: 67772416 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5412000 session 0x5616f5c21680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 397557760 unmapped: 67772416 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1337000/0x0/0x1bfc00000, data 0x873d50b/0x8917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f3bb74a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5e55400 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398024704 unmapped: 67305472 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5066741 data_alloc: 268435456 data_used: 50515968
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f7367400 session 0x5616f3bd7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.188134193s of 14.529722214s, submitted: 41
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398204928 unmapped: 67125248 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1312000/0x0/0x1bfc00000, data 0x876151b/0x893c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5412000 session 0x5616f56054a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f3bcc960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398229504 unmapped: 67100672 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1311000/0x0/0x1bfc00000, data 0x876152b/0x893d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398532608 unmapped: 66797568 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1311000/0x0/0x1bfc00000, data 0x876152b/0x893d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399089664 unmapped: 66240512 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f90d9400 session 0x5616f3324f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f7e48000 session 0x5616f40b03c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 61825024 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5197598 data_alloc: 268435456 data_used: 53239808
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f320a400 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f5412000 session 0x5616f4cdd0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f581ac00 session 0x5616f5de8960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x1a1312000/0x0/0x1bfc00000, data 0x876151b/0x893c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399360000 unmapped: 65970176 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399368192 unmapped: 65961984 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 65896448 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 401293312 unmapped: 64036864 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402456576 unmapped: 62873600 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5270306 data_alloc: 268435456 data_used: 53555200
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 62398464 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x19fd54000/0x0/0x1bfc00000, data 0x9d1e52b/0x9efa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x19fd54000/0x0/0x1bfc00000, data 0x9d1e52b/0x9efa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 62390272 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x19fd54000/0x0/0x1bfc00000, data 0x9d1e52b/0x9efa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 62390272 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.486301422s of 13.051985741s, submitted: 224
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 59400192 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403931136 unmapped: 61399040 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5357942 data_alloc: 268435456 data_used: 54382592
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x19f386000/0x0/0x1bfc00000, data 0xa6ec52b/0xa8c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 61317120 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 ms_handle_reset con 0x5616f90d9400 session 0x5616f509a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 61317120 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 heartbeat osd_stat(store_statfs(0x19f33c000/0x0/0x1bfc00000, data 0xa73652b/0xa912000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 405159936 unmapped: 60170240 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 410894336 unmapped: 54435840 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f5415c00 session 0x5616f509b0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 410992640 unmapped: 54337536 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5e18d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5477144 data_alloc: 285212672 data_used: 67272704
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 411017216 unmapped: 54312960 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 heartbeat osd_stat(store_statfs(0x19f336000/0x0/0x1bfc00000, data 0xa7383ce/0xa917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f5e53000 session 0x5616f5df2b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f60ffc00 session 0x5616f60ee1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 411017216 unmapped: 54312960 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f5415c00 session 0x5616f4d674a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 411361280 unmapped: 53968896 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 411361280 unmapped: 53968896 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 411369472 unmapped: 53960704 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f320a400 session 0x5616f5dfc000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5477144 data_alloc: 285212672 data_used: 67280896
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 ms_handle_reset con 0x5616f320a400 session 0x5616f4d670e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.401262283s of 11.751626015s, submitted: 96
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 366 ms_handle_reset con 0x5616f5415c00 session 0x5616f402ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 366 ms_handle_reset con 0x5616f5e53000 session 0x5616f5c021e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407568384 unmapped: 57761792 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f60ffc00 session 0x5616f5c03680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 heartbeat osd_stat(store_statfs(0x1a0135000/0x0/0x1bfc00000, data 0x9939e9a/0x9b18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407568384 unmapped: 57761792 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f509af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407724032 unmapped: 57606144 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 heartbeat osd_stat(store_statfs(0x19fd22000/0x0/0x1bfc00000, data 0x993ce9a/0x9b1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f40c9c00 session 0x5616f5828d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f5e57000 session 0x5616f58721e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407724032 unmapped: 57606144 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f320a400 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 63127552 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5124137 data_alloc: 268435456 data_used: 47226880
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 heartbeat osd_stat(store_statfs(0x1a0a72000/0x0/0x1bfc00000, data 0x8beee8a/0x8dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 63127552 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f5415c00 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 63119360 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 63119360 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f30d9800 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f7e49800 session 0x5616f5e18780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 heartbeat osd_stat(store_statfs(0x1a0a70000/0x0/0x1bfc00000, data 0x8befe9a/0x8dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 63086592 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 63078400 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5048501 data_alloc: 251658240 data_used: 42909696
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 ms_handle_reset con 0x5616f320a400 session 0x5616f582d680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.662788391s of 10.267221451s, submitted: 268
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403300352 unmapped: 62029824 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f5e55400 session 0x5616f3bcda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x561704582400 session 0x5616f5c03860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a0fbd000/0x0/0x1bfc00000, data 0x86a3e38/0x8881000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395223040 unmapped: 70107136 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f5415c00 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f320a400 session 0x5616f560fa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395296768 unmapped: 70033408 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f5e55400 session 0x5616f402b860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a1c84000/0x0/0x1bfc00000, data 0x7698a59/0x7875000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395337728 unmapped: 69992448 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f5414c00 session 0x5616f560ed20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f3516000 session 0x5616f560e5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f7e49800 session 0x5616f512d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395354112 unmapped: 69976064 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646769 data_alloc: 251658240 data_used: 32620544
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395370496 unmapped: 69959680 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2ec1000 session 0x5616f40b10e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2efe400 session 0x5616f362f860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f7e49800 session 0x5616f5e285a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f320a400 session 0x5616f5604960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f3516000 session 0x5616f5c20000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 395812864 unmapped: 69517312 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f75c2000 session 0x5616f5e15680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2ec0000 session 0x5616f582c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f3516000 session 0x5616f3288780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c023c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 heartbeat osd_stat(store_statfs(0x1a364d000/0x0/0x1bfc00000, data 0x60179f0/0x61f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [0,2] op hist [0,1,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2efe400 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 393445376 unmapped: 71884800 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 ms_handle_reset con 0x5616f2ec0000 session 0x5616f402b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 393453568 unmapped: 71876608 heap: 465330176 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 368 handle_osd_map epochs [369,369], i have 369, src has [1,369]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e194a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f75c2000 session 0x5616f3bb72c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f3516000 session 0x5616f5df3860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f7e49800 session 0x5616f509ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415408128 unmapped: 72425472 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4726661 data_alloc: 251658240 data_used: 37699584
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f320a400 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e190e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.139569283s of 10.152736664s, submitted: 361
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 ms_handle_reset con 0x5616f2ec0000 session 0x5616f5872f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407830528 unmapped: 80003072 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 ms_handle_reset con 0x5616f75c2000 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 ms_handle_reset con 0x5616f60f6400 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 ms_handle_reset con 0x561702363800 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 ms_handle_reset con 0x5616f5414c00 session 0x5616f560e780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 heartbeat osd_stat(store_statfs(0x1a1917000/0x0/0x1bfc00000, data 0x6ba9525/0x6d86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f3516000 session 0x5616f53f01e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407863296 unmapped: 79970304 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f2ec0000 session 0x5616f58bed20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f320a400 session 0x5616f5c20000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f2ec0000 session 0x5616f362f860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f320a400 session 0x5616f512d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407863296 unmapped: 79970304 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a1912000/0x0/0x1bfc00000, data 0x6bab251/0x6d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407822336 unmapped: 80011264 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407740416 unmapped: 80093184 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4750779 data_alloc: 251658240 data_used: 39964672
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f3516000 session 0x5616f560fa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399474688 unmapped: 88358912 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x5616f5414c00 session 0x5616f3bcda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 399499264 unmapped: 88334336 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 ms_handle_reset con 0x561702363800 session 0x5616f58290e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 heartbeat osd_stat(store_statfs(0x1a37bd000/0x0/0x1bfc00000, data 0x4d04241/0x4ee1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387629 data_alloc: 234881024 data_used: 19030016
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.702404022s of 10.330249786s, submitted: 186
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a37b9000/0x0/0x1bfc00000, data 0x4d05e4a/0x4ee4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a37b9000/0x0/0x1bfc00000, data 0x4d05e4a/0x4ee4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 95543296 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4393723 data_alloc: 234881024 data_used: 19087360
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390578176 unmapped: 97255424 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390332416 unmapped: 97501184 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 95985664 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a3206000/0x0/0x1bfc00000, data 0x52b0e4a/0x548f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444251 data_alloc: 234881024 data_used: 19460096
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a31fd000/0x0/0x1bfc00000, data 0x52bce4a/0x549b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 96108544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.374138832s of 13.847321510s, submitted: 116
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5875c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f75c2000 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 96100352 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443239 data_alloc: 234881024 data_used: 19464192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec0000 session 0x5616f509a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a3631000/0x0/0x1bfc00000, data 0x4996dd8/0x4b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a3631000/0x0/0x1bfc00000, data 0x4996dd8/0x4b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346868 data_alloc: 234881024 data_used: 16777216
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 96083968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f320a400 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f3516000 session 0x5616f3bd65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec0000 session 0x5616f5dfda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f320a400 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec1000 session 0x5616f52fe780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f75c2000 session 0x5616f5c21680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f5414c00 session 0x5616f2ec4d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f5414c00 session 0x5616f5c02780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 95895552 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec0000 session 0x5616f402b2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f320a400 session 0x5616f5e15680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bcc5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f75c2000 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f75c2000 session 0x5616f5e28780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec0000 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50dde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 heartbeat osd_stat(store_statfs(0x1a3181000/0x0/0x1bfc00000, data 0x5340dd8/0x551d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f320a400 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 74096640 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 74096640 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f40c9c00 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f2ec0000 session 0x5616f32a0000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 74088448 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4558023 data_alloc: 251658240 data_used: 43958272
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.393950462s of 10.760866165s, submitted: 102
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f63d9800 session 0x5616f509bc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f3257800 session 0x5616f58754a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 83968000 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 ms_handle_reset con 0x5616f40c9c00 session 0x5616f50dd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 84082688 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 ms_handle_reset con 0x5616f5414c00 session 0x5616f560d4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 ms_handle_reset con 0x5616f75c2000 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 heartbeat osd_stat(store_statfs(0x1a42c0000/0x0/0x1bfc00000, data 0x2e34b4f/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 383377408 unmapped: 104456192 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 383377408 unmapped: 104456192 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 104448000 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4105381 data_alloc: 218103808 data_used: 8441856
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382853120 unmapped: 104980480 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 heartbeat osd_stat(store_statfs(0x1a42c0000/0x0/0x1bfc00000, data 0x2e34b4f/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4176901 data_alloc: 234881024 data_used: 18378752
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e18b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 ms_handle_reset con 0x5616f320a400 session 0x5616f58292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.216200829s of 10.479046822s, submitted: 102
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 heartbeat osd_stat(store_statfs(0x1a42c0000/0x0/0x1bfc00000, data 0x2e34b4f/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 heartbeat osd_stat(store_statfs(0x1a44e8000/0x0/0x1bfc00000, data 0x2e36758/0x3015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179699 data_alloc: 234881024 data_used: 18378752
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f3257800 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f40c9c00 session 0x5616f582de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58bf860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f320a400 session 0x5616f3c625a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 382992384 unmapped: 104841216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f3257800 session 0x5616f402ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f63d9800 session 0x5616f33243c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f5e55400 session 0x5616f33c3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509a1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f320a400 session 0x5616f2f963c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 387678208 unmapped: 100155392 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 heartbeat osd_stat(store_statfs(0x1a42dc000/0x0/0x1bfc00000, data 0x3043758/0x3222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388907008 unmapped: 98926592 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 388907008 unmapped: 98926592 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x561704582400 session 0x5616f5de9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 ms_handle_reset con 0x5616f5e57000 session 0x5616f5916960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390782976 unmapped: 97050624 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373904 data_alloc: 234881024 data_used: 18944000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f63d9800 session 0x5616f512d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58bef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a2d28000/0x0/0x1bfc00000, data 0x45e747b/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f320a400 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.639972687s of 10.098323822s, submitted: 149
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390791168 unmapped: 97042432 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 390791168 unmapped: 97042432 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391159808 unmapped: 96673792 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x561704582400 session 0x5616f3354b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5e53000 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f60ffc00 session 0x5616f2f7a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 391651328 unmapped: 96182272 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 396378112 unmapped: 91455488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4414292 data_alloc: 251658240 data_used: 29167616
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f320a400 session 0x5616f402b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a2925000/0x0/0x1bfc00000, data 0x45e74ae/0x47c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5414c00 session 0x5616f5c5cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec0000 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5e53000 session 0x5616f509cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 400269312 unmapped: 87564288 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5e53000 session 0x5616f560f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec1000 session 0x5616f582da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x561704582400 session 0x5616f5c20b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5412000 session 0x5616f5c03e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f581ac00 session 0x5616f3b503c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 93200384 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec0000 session 0x5616f52ff4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 93200384 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a2fdf000/0x0/0x1bfc00000, data 0x39e04bd/0x3bc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 93200384 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f581ac00 session 0x5616f5874f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58743c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394649600 unmapped: 93184000 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4262697 data_alloc: 234881024 data_used: 23322624
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.085034370s of 10.033424377s, submitted: 327
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5412000 session 0x5616f5828000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394649600 unmapped: 93184000 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5e53000 session 0x5616f3bd7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a3508000/0x0/0x1bfc00000, data 0x3a0445b/0x3be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394797056 unmapped: 93036544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f75c2000 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a3508000/0x0/0x1bfc00000, data 0x3a0445b/0x3be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f3257800 session 0x5616f3cf12c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394797056 unmapped: 93036544 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 394821632 unmapped: 93011968 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 89055232 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4286059 data_alloc: 234881024 data_used: 26677248
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3355c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a3208000/0x0/0x1bfc00000, data 0x398e3f9/0x3b6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 89055232 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 ms_handle_reset con 0x5616f5412000 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a3556000/0x0/0x1bfc00000, data 0x39b73f9/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4302175 data_alloc: 251658240 data_used: 27394048
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 heartbeat osd_stat(store_statfs(0x1a3556000/0x0/0x1bfc00000, data 0x39b73f9/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.638890266s of 11.923023224s, submitted: 90
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a458f000/0x0/0x1bfc00000, data 0x39bc11c/0x3b9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4307621 data_alloc: 251658240 data_used: 27459584
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 88965120 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a41c5000/0x0/0x1bfc00000, data 0x3d8711c/0x3f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 85442560 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402620416 unmapped: 85213184 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402620416 unmapped: 85213184 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 85196800 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3ef9000/0x0/0x1bfc00000, data 0x405211c/0x4234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3ef9000/0x0/0x1bfc00000, data 0x405211c/0x4234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4365013 data_alloc: 251658240 data_used: 28643328
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 85196800 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3eed000/0x0/0x1bfc00000, data 0x405e11c/0x4240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402644992 unmapped: 85188608 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402644992 unmapped: 85188608 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3eed000/0x0/0x1bfc00000, data 0x405e11c/0x4240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402644992 unmapped: 85188608 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402644992 unmapped: 85188608 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4365653 data_alloc: 251658240 data_used: 28725248
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402653184 unmapped: 85180416 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3eed000/0x0/0x1bfc00000, data 0x405e11c/0x4240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402653184 unmapped: 85180416 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402653184 unmapped: 85180416 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x561704582400 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402b860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f3257800 session 0x5616f560fe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f5412000 session 0x5616f5916d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.451592445s of 15.033450127s, submitted: 68
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402669568 unmapped: 85164032 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f90d9400 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f75c2000 session 0x5616f582d4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df23c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f3257800 session 0x5616f3b51c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f5412000 session 0x5616f5c210e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f90d9400 session 0x5616f5604780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 85147648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4428128 data_alloc: 251658240 data_used: 28733440
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 85147648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3723000/0x0/0x1bfc00000, data 0x482911c/0x4a0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 85147648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 heartbeat osd_stat(store_statfs(0x1a3723000/0x0/0x1bfc00000, data 0x482911c/0x4a0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 85147648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f589ac00 session 0x5616f3bb7a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 ms_handle_reset con 0x5616f5e57400 session 0x5616f56054a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 85147648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 heartbeat osd_stat(store_statfs(0x1a3722000/0x0/0x1bfc00000, data 0x482913f/0x4a0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 402702336 unmapped: 85131264 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f5e57000 session 0x5616f5829a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f63d9800 session 0x5616f2ec52c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f90d9400 session 0x5616f5e28b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4393538 data_alloc: 251658240 data_used: 33644544
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 84361216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f5e53000 session 0x5616f52fe780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f2ec0000 session 0x5616f560f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f581ac00 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f2ec1000 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f5e57000 session 0x5616f5de9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f5e57000 session 0x5616f509a1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f2ec0000 session 0x5616f33c3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f581ac00 session 0x5616f58bf860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404570112 unmapped: 83263488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404570112 unmapped: 83263488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x483cee5/0x4a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404570112 unmapped: 83263488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404611072 unmapped: 83222528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461893 data_alloc: 251658240 data_used: 33726464
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404611072 unmapped: 83222528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 ms_handle_reset con 0x5616f2eff000 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404611072 unmapped: 83222528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 404611072 unmapped: 83222528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x483cee5/0x4a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 handle_osd_map epochs [378,378], i have 378, src has [1,378]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.362333298s of 14.791017532s, submitted: 113
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 377 handle_osd_map epochs [378,378], i have 378, src has [1,378]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 82165760 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a370a000/0x0/0x1bfc00000, data 0x483eaee/0x4a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 407674880 unmapped: 80158720 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4526547 data_alloc: 251658240 data_used: 42151936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 410247168 unmapped: 77586432 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a370a000/0x0/0x1bfc00000, data 0x483eaee/0x4a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 410566656 unmapped: 77266944 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f2eff000 session 0x5616f560da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f581ac00 session 0x5616f402a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e57000 session 0x5616f582cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 410738688 unmapped: 77094912 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5df2d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f90d9c00 session 0x5616f560c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f2eff000 session 0x5616f3bb6960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f3fb3400 session 0x5616f4cdd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 75350016 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f581ac00 session 0x5616f5e154a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e57000 session 0x5616f2f97e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412483584 unmapped: 75350016 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679167 data_alloc: 251658240 data_used: 42352640
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 75325440 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 75325440 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a2489000/0x0/0x1bfc00000, data 0x5ac0aee/0x5ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 75325440 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 75325440 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a2489000/0x0/0x1bfc00000, data 0x5ac0aee/0x5ca5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.066395760s of 11.356934547s, submitted: 73
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 75325440 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751577 data_alloc: 251658240 data_used: 42455040
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 72474624 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f3516800 session 0x5616f5c5c3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 72343552 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 72343552 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415784960 unmapped: 72048640 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1b1f000/0x0/0x1bfc00000, data 0x642aaee/0x660f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 61644800 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4872801 data_alloc: 268435456 data_used: 57180160
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 61612032 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 61612032 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1a7e000/0x0/0x1bfc00000, data 0x64cbaee/0x66b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 61612032 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1a7e000/0x0/0x1bfc00000, data 0x64cbaee/0x66b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 61612032 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1a7e000/0x0/0x1bfc00000, data 0x64cbaee/0x66b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426237952 unmapped: 61595648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.531919479s of 10.814566612s, submitted: 96
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873297 data_alloc: 268435456 data_used: 57188352
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426237952 unmapped: 61595648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1a7e000/0x0/0x1bfc00000, data 0x64cbaee/0x66b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a1a7e000/0x0/0x1bfc00000, data 0x64cbaee/0x66b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426237952 unmapped: 61595648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426237952 unmapped: 61595648 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 61587456 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f581ac00 session 0x5616f59174a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e57000 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e52800 session 0x5616f52ffe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5617029d0c00 session 0x5616f509d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435380224 unmapped: 52453376 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e53c00 session 0x5616f52fe780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e53c00 session 0x5616f3c634a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f581ac00 session 0x5616f4cdd0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e52800 session 0x5616f3cf1680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e57000 session 0x5616f560eb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a0fe2000/0x0/0x1bfc00000, data 0x6f66afe/0x714c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [0,0,2,4])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5012303 data_alloc: 268435456 data_used: 57208832
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426426368 unmapped: 61407232 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a0863000/0x0/0x1bfc00000, data 0x76e5afe/0x78cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428007424 unmapped: 59826176 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 59785216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 59785216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 59785216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5617029d0c00 session 0x5616f59172c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5031323 data_alloc: 268435456 data_used: 59416576
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 59785216 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a0855000/0x0/0x1bfc00000, data 0x76f3afe/0x78d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428605440 unmapped: 59228160 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.031945229s of 11.794381142s, submitted: 102
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431194112 unmapped: 56639488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431194112 unmapped: 56639488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e52800 session 0x5616f5df30e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431194112 unmapped: 56639488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5084711 data_alloc: 268435456 data_used: 61992960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431194112 unmapped: 56639488 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e57400 session 0x5616f5e18b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a0852000/0x0/0x1bfc00000, data 0x76f6afe/0x78dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 56598528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 56598528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e53c00 session 0x5616f5e285a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e53000 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f2ec0000 session 0x5616f512c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f63d9800 session 0x5616f3bcda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c02780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 56598528 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 heartbeat osd_stat(store_statfs(0x1a0842000/0x0/0x1bfc00000, data 0x7706afe/0x78ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 56582144 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e53000 session 0x5616f58bef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 ms_handle_reset con 0x5616f5e52800 session 0x5616f582c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 ms_handle_reset con 0x5616f5e53c00 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4926736 data_alloc: 268435456 data_used: 53444608
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 56582144 heap: 487833600 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 heartbeat osd_stat(store_statfs(0x1a15fd000/0x0/0x1bfc00000, data 0x6949822/0x6b30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 ms_handle_reset con 0x5616f5e52800 session 0x5616f60efe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3c63860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443842560 unmapped: 51175424 heap: 495017984 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 ms_handle_reset con 0x5616f5e53000 session 0x5616f5c023c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.562608719s of 10.000881195s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 379 handle_osd_map epochs [380,380], i have 380, src has [1,380]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442212352 unmapped: 56655872 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 380 ms_handle_reset con 0x5616f63d9800 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 380 heartbeat osd_stat(store_statfs(0x19f5ad000/0x0/0x1bfc00000, data 0x899732c/0x8b80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [0,0,0,0,0,1,2])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 380 handle_osd_map epochs [381,381], i have 381, src has [1,381]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 380 handle_osd_map epochs [381,381], i have 381, src has [1,381]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438665216 unmapped: 60203008 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 ms_handle_reset con 0x5616f5e57000 session 0x5616f5875680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c20b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434954240 unmapped: 63913984 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 ms_handle_reset con 0x5616f5e52800 session 0x5616f5c03860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 ms_handle_reset con 0x5616f5e53000 session 0x5616f53f0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5248920 data_alloc: 268435456 data_used: 51568640
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443523072 unmapped: 55345152 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435134464 unmapped: 63733760 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 ms_handle_reset con 0x5616f63d9800 session 0x5616f5c5cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434601984 unmapped: 64266240 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 heartbeat osd_stat(store_statfs(0x19f207000/0x0/0x1bfc00000, data 0x8d390dd/0x8f26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de85a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 73949184 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 heartbeat osd_stat(store_statfs(0x1a04bc000/0x0/0x1bfc00000, data 0x7a8407b/0x7c70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 73949184 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 ms_handle_reset con 0x5616f5e52800 session 0x5616f582dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4991128 data_alloc: 251658240 data_used: 34705408
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 73949184 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 heartbeat osd_stat(store_statfs(0x1a04bd000/0x0/0x1bfc00000, data 0x7a840dd/0x7c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 73949184 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5617029d0c00 session 0x5616f560f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f581ac00 session 0x5616f3f69860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 73949184 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5e53000 session 0x5616f33c2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2ec1000 session 0x5616f52ffa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f581ac00 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.190949440s of 11.112445831s, submitted: 222
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5e57400 session 0x5616f3f683c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 73359360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f60ff800 session 0x5616f402b2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 heartbeat osd_stat(store_statfs(0x1a1aa9000/0x0/0x1bfc00000, data 0x6497d0e/0x6685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5e52800 session 0x5616f5df2b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 heartbeat osd_stat(store_statfs(0x1a1aa9000/0x0/0x1bfc00000, data 0x6497d0e/0x6685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 73359360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f581ac00 session 0x5616f5829e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5e57400 session 0x5616f5e19680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f60ff800 session 0x5616f509af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5617029d0c00 session 0x5616f5828000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f75c2400 session 0x5616f58294a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5617029d0c00 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4839682 data_alloc: 251658240 data_used: 38920192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 73359360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2eff000 session 0x5616f402a000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 73359360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 73359360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f3257800 session 0x5616f509b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5412000 session 0x5616f4d66d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 heartbeat osd_stat(store_statfs(0x1a1498000/0x0/0x1bfc00000, data 0x6aa7d1e/0x6c96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 418586624 unmapped: 80281600 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2eff000 session 0x5616f582da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f75c2400 session 0x5616f509cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 79192064 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f581ac00 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5617029d0c00 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df2f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5617029d0c00 session 0x5616f33c2960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2eff000 session 0x5616f5e150e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5e14d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5412000 session 0x5616f2ec52c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f5412000 session 0x5616f2f7a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4563851 data_alloc: 251658240 data_used: 35672064
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 432635904 unmapped: 66232320 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3354b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 heartbeat osd_stat(store_statfs(0x1a2652000/0x0/0x1bfc00000, data 0x58eecfb/0x5adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 66224128 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f2eff000 session 0x5616f5875a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 432652288 unmapped: 66215936 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.298098564s of 10.731973648s, submitted: 94
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 432668672 unmapped: 66199552 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f75c2400 session 0x5616f3355e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 72335360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 ms_handle_reset con 0x5616f40c8400 session 0x5616f5e28000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 383 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489875 data_alloc: 234881024 data_used: 24907776
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420093952 unmapped: 78774272 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a264e000/0x0/0x1bfc00000, data 0x58f0a72/0x5adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 ms_handle_reset con 0x5616f581ac00 session 0x5616f58754a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420102144 unmapped: 78766080 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e19680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 421158912 unmapped: 77709312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 ms_handle_reset con 0x5616f2eff000 session 0x5616f58294a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 ms_handle_reset con 0x5616f5412000 session 0x5616f509d4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 ms_handle_reset con 0x5616f75c2400 session 0x5616f5828f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4483519 data_alloc: 234881024 data_used: 24866816
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3b30000/0x0/0x1bfc00000, data 0x4411a52/0x45fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3b30000/0x0/0x1bfc00000, data 0x4411a52/0x45fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3b30000/0x0/0x1bfc00000, data 0x4411a52/0x45fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a3b30000/0x0/0x1bfc00000, data 0x4411a52/0x45fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420118528 unmapped: 78749696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.226911545s of 10.434025764s, submitted: 114
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420175872 unmapped: 78692352 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4495595 data_alloc: 234881024 data_used: 25206784
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417005568 unmapped: 81862656 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 heartbeat osd_stat(store_statfs(0x1a39ca000/0x0/0x1bfc00000, data 0x4577a52/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417021952 unmapped: 81846272 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a39c6000/0x0/0x1bfc00000, data 0x457965b/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 81985536 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 81985536 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 81985536 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512553 data_alloc: 234881024 data_used: 26595328
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416882688 unmapped: 81985536 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416980992 unmapped: 81887232 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a39c7000/0x0/0x1bfc00000, data 0x457965b/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416980992 unmapped: 81887232 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f2eff000 session 0x5616f32a0d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f5412000 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417570816 unmapped: 81297408 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f581ac00 session 0x5616f5de8b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f90d8800 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5617029d1000 session 0x5616f58292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417570816 unmapped: 81297408 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4586012 data_alloc: 234881024 data_used: 26595328
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5dfdc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417570816 unmapped: 81297408 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417570816 unmapped: 81297408 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f2eff000 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.899076462s of 13.064254761s, submitted: 79
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f5e57400 session 0x5616f50d8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f60ff800 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f40c8400 session 0x5616f509a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f3fb3400 session 0x5616f2f7ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5617029d0c00 session 0x5616f5874000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a3097000/0x0/0x1bfc00000, data 0x4ea86bd/0x5097000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3c63c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417546240 unmapped: 81321984 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f2eff000 session 0x5616f3bcd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 83869696 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f581ac00 session 0x5616f50d8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 ms_handle_reset con 0x5616f581ac00 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 83853312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422668 data_alloc: 234881024 data_used: 21213184
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 83853312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 83853312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 83853312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a40e4000/0x0/0x1bfc00000, data 0x3e5a6cd/0x404a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 83853312 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414892032 unmapped: 83976192 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4490828 data_alloc: 251658240 data_used: 30756864
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416735232 unmapped: 82132992 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416735232 unmapped: 82132992 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 heartbeat osd_stat(store_statfs(0x1a40e4000/0x0/0x1bfc00000, data 0x3e5a6cd/0x404a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416735232 unmapped: 82132992 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.606307983s of 10.659096718s, submitted: 21
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416735232 unmapped: 82132992 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5872d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617029d0c00 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f90d8800 session 0x5616f560f680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f4cdcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416743424 unmapped: 82124800 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f90d8800 session 0x5616f582de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617029d0c00 session 0x5616f5e292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f3b503c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4587216 data_alloc: 251658240 data_used: 30765056
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 82100224 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f60ef860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 82100224 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a3553000/0x0/0x1bfc00000, data 0x49e54f7/0x4bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f90d8800 session 0x5616f5de9860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 82100224 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f320d400 session 0x5616f5c021e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5c023c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 418332672 unmapped: 80535552 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df23c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f40c6c00 session 0x5616f2ec5e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 418340864 unmapped: 80527360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f90d8800 session 0x5616f5df2b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f320cc00 session 0x5616f5e19680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617023cbc00 session 0x5616f52fe000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4616119 data_alloc: 251658240 data_used: 34701312
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 413622272 unmapped: 85245952 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f3f69860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f509b0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 78970880 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f40c6c00 session 0x5616f5df32c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a2b3c000/0x0/0x1bfc00000, data 0x53f8589/0x55f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.473925591s of 10.023031235s, submitted: 133
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 77414400 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5e283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617023cbc00 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df21e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 75767808 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f5001400 session 0x5616f402ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f589a000 session 0x5616f509b2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a215e000/0x0/0x1bfc00000, data 0x59bf527/0x5bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 75743232 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f509cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4787561 data_alloc: 251658240 data_used: 40722432
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f5001400 session 0x5616f5875680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 75743232 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617023cbc00 session 0x5616f52fe000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617023cb000 session 0x5616f402a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f2ec5e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f5001400 session 0x5616f52fef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 75243520 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 75243520 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a149e000/0x0/0x1bfc00000, data 0x6687527/0x6880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 75243520 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 75243520 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889206 data_alloc: 251658240 data_used: 40734720
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 75243520 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423632896 unmapped: 75235328 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a149b000/0x0/0x1bfc00000, data 0x668a527/0x6883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423632896 unmapped: 75235328 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f581ac00 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423641088 unmapped: 75227136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5617023cbc00 session 0x5616f5c021e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423641088 unmapped: 75227136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f5e59800 session 0x5616f5de9860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.508553505s of 12.171620369s, submitted: 123
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a1398000/0x0/0x1bfc00000, data 0x678d527/0x6986000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f5001400 session 0x5616f582de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950327 data_alloc: 251658240 data_used: 41025536
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 72982528 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2f26400 session 0x5616f362f860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 425967616 unmapped: 72900608 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 428392448 unmapped: 70475776 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a0c63000/0x0/0x1bfc00000, data 0x6eb4527/0x70ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429506560 unmapped: 69361664 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a0c63000/0x0/0x1bfc00000, data 0x6eb4527/0x70ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f6ac1000 session 0x5616f5c02b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69353472 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5003673 data_alloc: 268435456 data_used: 46665728
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429514752 unmapped: 69353472 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a0c63000/0x0/0x1bfc00000, data 0x6eb4527/0x70ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f8392c00 session 0x5616f5e154a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429539328 unmapped: 69328896 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x561704583000 session 0x5616f5e28f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2f26400 session 0x5616f2ec52c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a0c4c000/0x0/0x1bfc00000, data 0x6ed854a/0x70d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429539328 unmapped: 69328896 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f6ac1000 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f4132000 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x561704583800 session 0x5616f3354b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429563904 unmapped: 69304320 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 432996352 unmapped: 65871872 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.953697205s of 10.443795204s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5072811 data_alloc: 268435456 data_used: 54734848
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 64143360 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436854784 unmapped: 62013440 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436854784 unmapped: 62013440 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 heartbeat osd_stat(store_statfs(0x1a0c47000/0x0/0x1bfc00000, data 0x6ed958d/0x70d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436854784 unmapped: 62013440 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436862976 unmapped: 62005248 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f320cc00 session 0x5616f5e28000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f6ac1000 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5154891 data_alloc: 268435456 data_used: 57315328
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442392576 unmapped: 56475648 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 ms_handle_reset con 0x5616f60ff800 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 ms_handle_reset con 0x561704583000 session 0x5616f402a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 ms_handle_reset con 0x5616f320cc00 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5c023c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442146816 unmapped: 56721408 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 heartbeat osd_stat(store_statfs(0x1a03ba000/0x0/0x1bfc00000, data 0x77642c0/0x7963000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453378048 unmapped: 45490176 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448872448 unmapped: 49995776 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 ms_handle_reset con 0x5616f60ff800 session 0x5616f5872000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 ms_handle_reset con 0x5616f6ac1000 session 0x5616f33243c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448872448 unmapped: 49995776 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.538432121s of 10.001110077s, submitted: 149
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 heartbeat osd_stat(store_statfs(0x19f51e000/0x0/0x1bfc00000, data 0x860025e/0x87fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f40c5c00 session 0x5616f509cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f5e57400 session 0x5616f2f7ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f5412000 session 0x5616f402a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df30e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f60ff800 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5387197 data_alloc: 285212672 data_used: 66945024
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f6ac1000 session 0x5616f58bf860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452665344 unmapped: 46202880 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f6ac1000 session 0x5616f560ef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446562304 unmapped: 52305920 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5de83c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c021e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f60ff800 session 0x5616f52fef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446619648 unmapped: 52248576 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f402a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f320cc00 session 0x5616f5875680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450437120 unmapped: 48431104 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5412000 session 0x5616f3354b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5e57400 session 0x5616f512c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448888832 unmapped: 49979392 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f6ac1000 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f6ac1000 session 0x5616f5dfc1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 heartbeat osd_stat(store_statfs(0x19cb93000/0x0/0x1bfc00000, data 0x9de9d14/0x9fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5490833 data_alloc: 268435456 data_used: 62476288
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449118208 unmapped: 49750016 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 390 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5828000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 390 ms_handle_reset con 0x5616f320cc00 session 0x5616f509c960
Nov 29 03:54:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5e57400 session 0x5616f50d8f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f3257000 session 0x5616f5dfcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5412000 session 0x5616f3bcc1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f30d9800 session 0x5616f4cdcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 heartbeat osd_stat(store_statfs(0x19c946000/0x0/0x1bfc00000, data 0xa034871/0xa237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f7095c00 session 0x5616f5e294a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433872896 unmapped: 64995328 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.518003464s of 10.177884102s, submitted: 314
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5e57400 session 0x5616f33243c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5211523 data_alloc: 251658240 data_used: 41857024
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433881088 unmapped: 64987136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433881088 unmapped: 64987136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f4132000 session 0x5616f3bb6960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435404800 unmapped: 63463424 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433635328 unmapped: 65232896 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f30d9800 session 0x5616f5e29860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5412000 session 0x5616f4cdda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 heartbeat osd_stat(store_statfs(0x19e6dd000/0x0/0x1bfc00000, data 0x82a15ef/0x84a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5283691 data_alloc: 268435456 data_used: 48095232
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 heartbeat osd_stat(store_statfs(0x19e6d9000/0x0/0x1bfc00000, data 0x82a41f8/0x84a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f3c62000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f6ac1000 session 0x5616f560e780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f7095c00 session 0x5616f5873680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f30d9800 session 0x5616f362f860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5412000 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f5de8960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f6ac1000 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.647849083s of 10.085773468s, submitted: 53
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5527646 data_alloc: 268435456 data_used: 60395520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f3257c00 session 0x5616f5df2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 80101376 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f30d9800 session 0x5616f5829e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5412000 session 0x5616f52ffe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 80093184 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f509b2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 80093184 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f75c1400 session 0x5616f60ee1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437993472 unmapped: 77135872 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 heartbeat osd_stat(store_statfs(0x19c789000/0x0/0x1bfc00000, data 0xa1ec208/0xa3ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439795712 unmapped: 75333632 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19b575000/0x0/0x1bfc00000, data 0xa265f7f/0xa468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [2])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 393 ms_handle_reset con 0x5616f412d400 session 0x5616f5c20000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5487571 data_alloc: 268435456 data_used: 53460992
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 74129408 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5483427 data_alloc: 268435456 data_used: 53526528
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19c3b4000/0x0/0x1bfc00000, data 0x9428f6f/0x962a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.015411377s of 10.591547012s, submitted: 184
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19c3b1000/0x0/0x1bfc00000, data 0x942bf6f/0x962d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441221120 unmapped: 73908224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5481703 data_alloc: 268435456 data_used: 53526528
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441221120 unmapped: 73908224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441229312 unmapped: 73900032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3ac000/0x0/0x1bfc00000, data 0x942eb78/0x9631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441745408 unmapped: 73383936 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441745408 unmapped: 73383936 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3ac000/0x0/0x1bfc00000, data 0x942eb78/0x9631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5503425 data_alloc: 268435456 data_used: 55824384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.513783455s of 10.562233925s, submitted: 26
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3a4000/0x0/0x1bfc00000, data 0x9435b78/0x9638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5505311 data_alloc: 268435456 data_used: 55832576
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f2f7a960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 73351168 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f3324b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f75c1400 session 0x5616f5de9e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3a5000/0x0/0x1bfc00000, data 0x9436b78/0x9639000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f4011c00 session 0x5616f5de8b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f58752c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f4cdc960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f32a0d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f75c1400 session 0x5616f5829a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2f36800 session 0x5616f5df3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f582cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442933248 unmapped: 72196096 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5c03e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588007 data_alloc: 268435456 data_used: 55848960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442998784 unmapped: 72130560 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.622900009s of 10.002265930s, submitted: 92
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f5e28780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19b9b5000/0x0/0x1bfc00000, data 0x9e22dea/0xa028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f560dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f53f05a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f5874f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f5916f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570839 data_alloc: 268435456 data_used: 55844864
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5872f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443031552 unmapped: 72097792 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444547072 unmapped: 70582272 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448094208 unmapped: 67035136 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19bb91000/0x0/0x1bfc00000, data 0x9c47b88/0x9e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec2000 session 0x5616f2f7a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f50d9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d0fa000/0x0/0x1bfc00000, data 0x836ab88/0x856e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d0fa000/0x0/0x1bfc00000, data 0x836ab88/0x856e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5381098 data_alloc: 268435456 data_used: 53198848
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d46f000/0x0/0x1bfc00000, data 0x836ab78/0x856d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d46f000/0x0/0x1bfc00000, data 0x836ab78/0x856d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 67756032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f509d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f33245a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 67756032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.172218323s of 12.415048599s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f5c20d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f33c30e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5c214a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447397888 unmapped: 67731456 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19e677000/0x0/0x1bfc00000, data 0x7167b25/0x7366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172649 data_alloc: 251658240 data_used: 44666880
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19e678000/0x0/0x1bfc00000, data 0x7167b25/0x7366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447078400 unmapped: 68050944 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5270581 data_alloc: 268435456 data_used: 45740032
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447078400 unmapped: 68050944 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 66854912 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 66854912 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5875a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2eff000 session 0x5616f5df3860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19db3c000/0x0/0x1bfc00000, data 0x7c95b25/0x7e94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df2000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442515456 unmapped: 72613888 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442515456 unmapped: 72613888 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.709886551s of 11.222613335s, submitted: 175
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058100 data_alloc: 251658240 data_used: 35414016
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 72818688 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19eed0000/0x0/0x1bfc00000, data 0x690eab3/0x6b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f60ef680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec2000 session 0x5616f5828d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df32c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f2eff000 session 0x5616f582c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f30d8800 session 0x5616f402b2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f5412000 session 0x5616f560eb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5075575 data_alloc: 251658240 data_used: 35434496
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 heartbeat osd_stat(store_statfs(0x19eca8000/0x0/0x1bfc00000, data 0x6b317bf/0x6d33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f30d8800 session 0x5616f509ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442294272 unmapped: 72835072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c02b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec2000 session 0x5616f560dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec2000 session 0x5616f582cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.502508163s of 10.008315086s, submitted: 139
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 heartbeat osd_stat(store_statfs(0x19eeab000/0x0/0x1bfc00000, data 0x693553d/0x6b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5054005 data_alloc: 251658240 data_used: 35422208
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea7000/0x0/0x1bfc00000, data 0x6937146/0x6b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f52ffe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f5412000 session 0x5616f5c02d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f582d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea7000/0x0/0x1bfc00000, data 0x6937156/0x6b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058771 data_alloc: 251658240 data_used: 35430400
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442318848 unmapped: 72810496 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f30d8800 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e19a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea6000/0x0/0x1bfc00000, data 0x693717f/0x6b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec2000 session 0x5616f3cf0960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5c02000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442343424 unmapped: 72785920 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x561704583800 session 0x5616f50dd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2f26400 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442359808 unmapped: 72769536 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5605e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442368000 unmapped: 72761344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442368000 unmapped: 72761344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.372431755s of 10.531121254s, submitted: 66
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198499 data_alloc: 251658240 data_used: 35426304
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19e973000/0x0/0x1bfc00000, data 0x6e6b185/0x706a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442400768 unmapped: 72728576 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f2f7a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f5c210e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec52c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f5001400 session 0x5616f5df3680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f4d66f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50dd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f4cdd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c02000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442400768 unmapped: 72728576 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 72581120 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442556416 unmapped: 72572928 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5617023cbc00 session 0x5616f5de9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f56054a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5238067 data_alloc: 251658240 data_used: 40054784
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x7bc81a8/0x7dc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f30d8800 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447840256 unmapped: 67289088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f582d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 69984256 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f581ac00 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442048512 unmapped: 73080832 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x561704583800 session 0x5616f33245a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f5de9c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.607354164s of 10.012436867s, submitted: 120
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5de9860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5605860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4631949 data_alloc: 234881024 data_used: 19976192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a1595000/0x0/0x1bfc00000, data 0x3e3b165/0x4038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 91422720 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f581ac00 session 0x5616f3c63860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 90513408 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624741 data_alloc: 234881024 data_used: 16244736
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 90169344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1711000/0x0/0x1bfc00000, data 0x3cb0eb9/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1706000/0x0/0x1bfc00000, data 0x3cc4eb9/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1706000/0x0/0x1bfc00000, data 0x3cc4eb9/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.879499435s of 10.224841118s, submitted: 135
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4626897 data_alloc: 234881024 data_used: 16068608
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 91144192 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x561704583800 session 0x5616f3bcc960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x561704583800 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f2eff000 session 0x5616f402ab40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 91176960 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1381000/0x0/0x1bfc00000, data 0x404cb24/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 91176960 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f560e5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f50dd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50dcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5dfd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1341000/0x0/0x1bfc00000, data 0x408cb87/0x428d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f509c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5617023cbc00 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f53f0f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686260 data_alloc: 234881024 data_used: 16064512
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1041000/0x0/0x1bfc00000, data 0x438cb87/0x458d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2eff000 session 0x5616f582c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f582cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5873e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f58725a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f33250e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5c02d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1041000/0x0/0x1bfc00000, data 0x438cb87/0x458d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4584740 data_alloc: 234881024 data_used: 14233600
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.874294281s of 11.168619156s, submitted: 124
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f560d4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5e285a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1f86000/0x0/0x1bfc00000, data 0x344aa90/0x3647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530552 data_alloc: 234881024 data_used: 10539008
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420438016 unmapped: 94691328 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5e14d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1f87000/0x0/0x1bfc00000, data 0x344aa90/0x3647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334326 data_alloc: 218103808 data_used: 5632000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x210ea1e/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x210ea1e/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334326 data_alloc: 218103808 data_used: 5632000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.168770790s of 15.469518661s, submitted: 82
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 100958208 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 100950016 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2c1b000/0x0/0x1bfc00000, data 0x27b8a1e/0x29b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 100950016 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5de9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386012 data_alloc: 218103808 data_used: 5632000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 100941824 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50d8f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f50d9e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f3354f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f52feb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2784000/0x0/0x1bfc00000, data 0x2c4ea2e/0x2e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 100909056 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2704000/0x0/0x1bfc00000, data 0x2ccea2e/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 100909056 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5de8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50dd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f50dde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f4cdde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f402b860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f3bd7680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415342592 unmapped: 99786752 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f509a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df21e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c5cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f3b503c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415342592 unmapped: 99786752 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f53f05a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5872000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f60ef4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df32c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50dde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517000 session 0x5616f5df3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5de9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553674 data_alloc: 218103808 data_used: 5632000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415367168 unmapped: 99762176 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f58725a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5873e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f402a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f582c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f59174a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3c625a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5df3680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 102400000 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320d400 session 0x5616f5828f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5de85a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 102383616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.485384941s of 11.044736862s, submitted: 152
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0223000/0x0/0x1bfc00000, data 0x51acaaf/0x53ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5875c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0223000/0x0/0x1bfc00000, data 0x51acaaf/0x53ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416333824 unmapped: 102465536 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f5875c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 102449152 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f8393800 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831007 data_alloc: 234881024 data_used: 14082048
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 101728256 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5828f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f5df3680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 101580800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f60ee5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 101580800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f52fef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417562624 unmapped: 101236736 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a01fc000/0x0/0x1bfc00000, data 0x51d0af2/0x53d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f3324000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f33245a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 101064704 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a01fb000/0x0/0x1bfc00000, data 0x51d0b15/0x53d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f512c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4866860 data_alloc: 234881024 data_used: 17317888
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 100007936 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f509cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 98074624 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420741120 unmapped: 98058240 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821496964s of 10.020360947s, submitted: 65
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0813000/0x0/0x1bfc00000, data 0x4b16ad3/0x4d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426475520 unmapped: 92323840 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a03d9000/0x0/0x1bfc00000, data 0x4ff5ad3/0x51f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429613056 unmapped: 89186304 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4957772 data_alloc: 251658240 data_used: 32051200
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429613056 unmapped: 89186304 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434085888 unmapped: 84713472 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f60efe00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26800 session 0x5616f33c3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fe2b000/0x0/0x1bfc00000, data 0x559aad3/0x579a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434126848 unmapped: 84672512 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f58292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4843881 data_alloc: 234881024 data_used: 28360704
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f53f0f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509a1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434356224 unmapped: 84443136 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0ae4000/0x0/0x1bfc00000, data 0x48deab3/0x4adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.756709099s of 10.414925575s, submitted: 278
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 80486400 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f60ef680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437952512 unmapped: 80846848 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4909693 data_alloc: 234881024 data_used: 26497024
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437387264 unmapped: 81412096 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560e5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0084000/0x0/0x1bfc00000, data 0x5346ab3/0x5544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26800 session 0x5616f509af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437395456 unmapped: 81403904 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a07d6000/0x0/0x1bfc00000, data 0x49a4ab3/0x4ba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 80076800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 80076800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438591488 unmapped: 80207872 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5605a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a09f7000/0x0/0x1bfc00000, data 0x49d9ab3/0x4bd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841649 data_alloc: 234881024 data_used: 26312704
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0a37000/0x0/0x1bfc00000, data 0x4999a50/0x4b96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.172043800s of 11.835883141s, submitted: 192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4842113 data_alloc: 234881024 data_used: 26324992
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438607872 unmapped: 80191488 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f53f12c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f4cdcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 80183296 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609335 data_alloc: 234881024 data_used: 16211968
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609335 data_alloc: 234881024 data_used: 16211968
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433176576 unmapped: 85622784 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f509ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e28f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5df3a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.733957291s of 11.840334892s, submitted: 44
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f509b680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f509cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f33c2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682381 data_alloc: 234881024 data_used: 16211968
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17c3000/0x0/0x1bfc00000, data 0x3c0fa2d/0x3e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f58290e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17c3000/0x0/0x1bfc00000, data 0x3c0fa2d/0x3e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433364992 unmapped: 85434368 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433381376 unmapped: 85417984 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733210 data_alloc: 234881024 data_used: 22900736
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a179e000/0x0/0x1bfc00000, data 0x3c33a50/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.563005447s of 10.698960304s, submitted: 55
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733674 data_alloc: 234881024 data_used: 22908928
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a179e000/0x0/0x1bfc00000, data 0x3c33a50/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f60ee5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5605a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433545216 unmapped: 85254144 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5e28f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f5dfdc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4803369 data_alloc: 234881024 data_used: 22982656
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434413568 unmapped: 84385792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 82583552 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436895744 unmapped: 81903616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x4bcaa50/0x4dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436895744 unmapped: 81903616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.587735176s of 11.122464180s, submitted: 139
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f60ef0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5de83c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f362e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f5e18f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436936704 unmapped: 81862656 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4970904 data_alloc: 234881024 data_used: 24236032
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436936704 unmapped: 81862656 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 81854464 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 81854464 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438747136 unmapped: 80052224 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fc82000/0x0/0x1bfc00000, data 0x574eab2/0x594c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f3354f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5dfd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438747136 unmapped: 80052224 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4844900 data_alloc: 234881024 data_used: 23859200
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437157888 unmapped: 81641472 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3bb70e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f509d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.755673409s of 10.567603111s, submitted: 95
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f5e29680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3c63860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5c03e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f402a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0667000/0x0/0x1bfc00000, data 0x4d6ba7f/0x4f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e52c00 session 0x5616f5e28960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437321728 unmapped: 81477632 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4962291 data_alloc: 251658240 data_used: 33669120
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440590336 unmapped: 78209024 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f5de8000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f5e28000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438689792 unmapped: 80109568 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0667000/0x0/0x1bfc00000, data 0x4d6ba7f/0x4f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5df34a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 80084992 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440532992 unmapped: 78266368 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 78249984 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4842813 data_alloc: 234881024 data_used: 23040000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440606720 unmapped: 78192640 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0921000/0x0/0x1bfc00000, data 0x46a1a7f/0x489d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f582cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f5de8d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f33245a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440614912 unmapped: 78184448 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f2f7af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f5de8960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f58283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c02000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969624 data_alloc: 234881024 data_used: 29413376
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3266c00 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016990662s of 12.429978371s, submitted: 257
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 80691200 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,9,15,9])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 80691200 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f976000/0x0/0x1bfc00000, data 0x5649b00/0x5848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f58294a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 80674816 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f954000/0x0/0x1bfc00000, data 0x566ab23/0x586a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 80674816 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f954000/0x0/0x1bfc00000, data 0x566ab23/0x586a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058977 data_alloc: 251658240 data_used: 35041280
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f92d000/0x0/0x1bfc00000, data 0x5690b23/0x5890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f928000/0x0/0x1bfc00000, data 0x5695b23/0x5895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f928000/0x0/0x1bfc00000, data 0x5695b23/0x5895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5106433 data_alloc: 251658240 data_used: 39256064
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.188847542s of 10.390885353s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447930368 unmapped: 75071488 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448323584 unmapped: 74678272 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449970176 unmapped: 73031680 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f322000/0x0/0x1bfc00000, data 0x5c9cb23/0x5e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2,70])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449093632 unmapped: 73908224 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177385 data_alloc: 251658240 data_used: 39260160
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 73621504 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 73621504 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19eeb9000/0x0/0x1bfc00000, data 0x6104b4c/0x6305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,5,2,50,0,7])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 64700416 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452206592 unmapped: 70795264 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e52800 session 0x5616f5c03e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3519400 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f60f9c00 session 0x5616f5e18f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5345026 data_alloc: 251658240 data_used: 40615936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f362e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3519400 session 0x5616f5de9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.339155436s of 10.027582169s, submitted: 217
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19c8e7000/0x0/0x1bfc00000, data 0x7536b4c/0x7737000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453509120 unmapped: 69492736 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453509120 unmapped: 69492736 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 69468160 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5374693 data_alloc: 251658240 data_used: 40714240
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 69459968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 64700416 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19c7f4000/0x0/0x1bfc00000, data 0x7629b85/0x782a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 64675840 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 64675840 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5454477 data_alloc: 268435456 data_used: 48197632
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.992421150s of 10.850731850s, submitted: 70
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 64651264 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 400 heartbeat osd_stat(store_statfs(0x19c7ca000/0x0/0x1bfc00000, data 0x7651b85/0x7852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 64643072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 400 ms_handle_reset con 0x5617023cbc00 session 0x5616f5874b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 64643072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 400 ms_handle_reset con 0x5616f5436000 session 0x5616f3354f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5459395 data_alloc: 268435456 data_used: 48234496
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3518000 session 0x5616f5c034a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f52ff860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f5412000 session 0x5616f509b0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3519400 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f6ac1000 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5617023cbc00 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 43663360 heap: 532127744 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 heartbeat osd_stat(store_statfs(0x19c7c5000/0x0/0x1bfc00000, data 0x76558a8/0x7857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [0,0,0,0,0,0,1,5])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f7094400 session 0x5616f59163c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3518000 session 0x5616f50dcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3519400 session 0x5616f402a3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f53f0f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 475619328 unmapped: 63553536 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f3b503c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f560d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476692480 unmapped: 62480384 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f58750e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f7094400 session 0x5616f5e283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5617023cbc00 session 0x5616f3bd6b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f3354b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f582de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 62472192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6c000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476717056 unmapped: 62455808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f5412000 session 0x5616f5e29860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f7094400 session 0x5616f5e283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5825660 data_alloc: 268435456 data_used: 60710912
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6c000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 62447616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476758016 unmapped: 62414848 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.162752151s of 10.004703522s, submitted: 169
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f5874000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f5e29e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f5412000 session 0x5616f5605e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5617029d0400 session 0x5616f60efa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f5917e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6e000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5810520 data_alloc: 268435456 data_used: 60723200
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f5c21860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5412000 session 0x5616f52fe000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f33c2780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f60fd000 session 0x5616f3bcc1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3518000 session 0x5616f5e29c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3519400 session 0x5616f5e281e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 65617920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 65617920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.9 total, 600.0 interval
Cumulative writes: 62K writes, 244K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
Cumulative WAL: 62K writes, 23K syncs, 2.68 writes per sync, written: 0.24 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.44 MB, 0.08 MB/s
Interval WAL: 11K writes, 4694 syncs, 2.50 writes per sync, written: 0.05 GB, 0.08 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e56800 session 0x5616f560fa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e57400 session 0x5616f3355c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x198c45000/0x0/0x1bfc00000, data 0xa031c9a/0xa239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473604096 unmapped: 65568768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f5dfd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e52800 session 0x5616f60ee5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f560e960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473604096 unmapped: 65568768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3519400 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3518000 session 0x5616f50dcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e56800 session 0x5616f3f692c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 69574656 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e52800 session 0x5616f5917680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5367180 data_alloc: 251658240 data_used: 41144320
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f3cf1680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 69574656 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c02d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469614592 unmapped: 69558272 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469614592 unmapped: 69558272 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.227970123s of 10.570312500s, submitted: 124
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f3325680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f32e2400 session 0x5616f582cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19b442000/0x0/0x1bfc00000, data 0x7839b94/0x7a3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f58290e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 73703424 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465543168 unmapped: 73629696 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247727 data_alloc: 268435456 data_used: 53030912
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: mgrc ms_handle_reset ms_handle_reset con 0x5616fb858400
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: mgrc handle_mgr_configure stats_period=5
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320d000 session 0x5616f27045a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320dc00 session 0x5616f5c205a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f40c7800 session 0x5616f582d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247727 data_alloc: 268435456 data_used: 53030912
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.256523132s of 11.341904640s, submitted: 35
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5b000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [2])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467238912 unmapped: 71933952 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5342124 data_alloc: 268435456 data_used: 57995264
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c6d6000/0x0/0x1bfc00000, data 0x65a5b94/0x67a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5344980 data_alloc: 268435456 data_used: 58019840
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c6d6000/0x0/0x1bfc00000, data 0x65a5b94/0x67a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f58752c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f58bfc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f60ffc00 session 0x5616f560c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470687744 unmapped: 68485120 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f2f7ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df3860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa1000 session 0x5616f4d663c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5373747 data_alloc: 268435456 data_used: 58421248
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.540078163s of 11.709421158s, submitted: 32
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5370515 data_alloc: 268435456 data_used: 58421248
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f3bb7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471433216 unmapped: 67739648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f5e19c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5e18000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f5c025a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f7094800 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471433216 unmapped: 67739648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f509b4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f5916d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f3bd6f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5de90e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f7e48400 session 0x5616f4d67680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19bf7c000/0x0/0x1bfc00000, data 0x6cfcc29/0x6f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5430629 data_alloc: 268435456 data_used: 61665280
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19bf7c000/0x0/0x1bfc00000, data 0x6cfcc29/0x6f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f60eed20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f5873680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 67010560 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.858821869s of 12.966835976s, submitted: 38
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5de85a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472170496 unmapped: 67002368 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5441572 data_alloc: 268435456 data_used: 61931520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472170496 unmapped: 67002368 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f412d800 session 0x5616f4d67e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f412d800 session 0x5616f582d860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f320ac00 session 0x5616f5c210e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c20000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 57712640 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 heartbeat osd_stat(store_statfs(0x19bf57000/0x0/0x1bfc00000, data 0x6d20c39/0x6f27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f5415400 session 0x5616f5df23c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5c5d0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 53321728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 485965824 unmapped: 53207040 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x199914000/0x0/0x1bfc00000, data 0x9360412/0x956a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 482426880 unmapped: 56745984 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5807578 data_alloc: 285212672 data_used: 72122368
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 483565568 unmapped: 55607296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 54419456 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x1991d1000/0x0/0x1bfc00000, data 0x9aa2612/0x9cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x1991d1000/0x0/0x1bfc00000, data 0x9aa2612/0x9cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 54353920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f2f26400 session 0x5616f3bb6960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 54345728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f3518000 session 0x5616f58283c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f3519400 session 0x5616f5de8f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 heartbeat osd_stat(store_statfs(0x19b807000/0x0/0x1bfc00000, data 0x746a3a5/0x7676000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 54345728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.260406494s of 11.294813156s, submitted: 121
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f5412000 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f7095c00 session 0x5616f33550e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5535768 data_alloc: 285212672 data_used: 70262784
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f320ac00 session 0x5616f362f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484868096 unmapped: 54304768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c210e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484868096 unmapped: 54304768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 heartbeat osd_stat(store_statfs(0x19bdfc000/0x0/0x1bfc00000, data 0x6e73fca/0x7081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,1,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488734720 unmapped: 50438144 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488071168 unmapped: 51101696 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f3519400 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f5412000 session 0x5616f2f7ba40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 410 ms_handle_reset con 0x5616f3518000 session 0x5616f3bb7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484196352 unmapped: 54976512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5300557 data_alloc: 268435456 data_used: 48500736
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484196352 unmapped: 54976512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f2f26400 session 0x5616f5de92c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 59891712 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f90d8400 session 0x5616f52ffa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f320ac00 session 0x5616f5e29a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f320ac00 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 heartbeat osd_stat(store_statfs(0x19d77c000/0x0/0x1bfc00000, data 0x54f24dd/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973071 data_alloc: 234881024 data_used: 27447296
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.968615532s of 10.758380890s, submitted: 303
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5829680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f3fb3400 session 0x5616f582c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19e860000/0x0/0x1bfc00000, data 0x440ce4f/0x461d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3c63c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454434816 unmapped: 84738048 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c5cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd6f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d67e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f320ac00 session 0x5616f50dde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bb7680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455491584 unmapped: 83681280 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455491584 unmapped: 83681280 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f2b8000/0x0/0x1bfc00000, data 0x39b4e4f/0x3bc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4799044 data_alloc: 218103808 data_used: 10117120
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f2b7000/0x0/0x1bfc00000, data 0x39b5e4f/0x3bc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 83705856 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 83705856 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f6aa1c00 session 0x5616f3355e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616ff64d800 session 0x5616f5e29c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f509b4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5873680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 86237184 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 86237184 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 86228992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff0a000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4671481 data_alloc: 218103808 data_used: 6033408
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5c21a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 86253568 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.958932877s of 10.313260078s, submitted: 148
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 86253568 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 86335488 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4721172 data_alloc: 234881024 data_used: 12980224
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4721524 data_alloc: 234881024 data_used: 12980224
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.325275421s of 11.334540367s, submitted: 3
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff2a000/0x0/0x1bfc00000, data 0x2d45a0e/0x2f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454483968 unmapped: 84688896 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4777692 data_alloc: 234881024 data_used: 13152256
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f8e3000/0x0/0x1bfc00000, data 0x337ea0e/0x358d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 83591168 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455589888 unmapped: 83582976 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4775208 data_alloc: 234881024 data_used: 13279232
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455565312 unmapped: 83607552 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455565312 unmapped: 83607552 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f320ac00 session 0x5616f4cdde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560fa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f52ffa40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795422554s of 10.063662529s, submitted: 97
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f3288b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459776000 unmapped: 79396864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f3354f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a000e000/0x0/0x1bfc00000, data 0x28dca0e/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdd2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616ff64d800 session 0x5616f5874b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f52fe780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f5df3860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f2f7a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4702053 data_alloc: 218103808 data_used: 6033408
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f5916b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f5e59000 session 0x5616f5e28780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de9680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5605e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58750e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f512c1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f5de8d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a00d1000/0x0/0x1bfc00000, data 0x278f989/0x299c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4613701 data_alloc: 218103808 data_used: 2564096
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a00d1000/0x0/0x1bfc00000, data 0x278f9ac/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447897600 unmapped: 91275264 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.336462021s of 11.657950401s, submitted: 92
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682560 data_alloc: 218103808 data_used: 11546624
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f5605c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f5e59000 session 0x5616f5604b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682488 data_alloc: 218103808 data_used: 11546624
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.503433228s of 10.529278755s, submitted: 6
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 88915968 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 88883200 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450330624 unmapped: 88842240 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450330624 unmapped: 88842240 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58734a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f58281e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f5c205a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682600 data_alloc: 218103808 data_used: 11554816
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.885305405s of 17.616886139s, submitted: 256
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f582cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f582cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450969600 unmapped: 88203264 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5916000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439574528 unmapped: 99598336 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3c63860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f5c212c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f362e1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f5604d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e281e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4622542 data_alloc: 218103808 data_used: 2826240
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1194000/0x0/0x1bfc00000, data 0x270c9ec/0x291a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5c201e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f40b12c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bcc960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 99885056 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1194000/0x0/0x1bfc00000, data 0x270c9ec/0x291a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 99885056 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58752c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4621486 data_alloc: 218103808 data_used: 2826240
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439296000 unmapped: 99876864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439296000 unmapped: 99876864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560e5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.537684441s of 10.185144424s, submitted: 96
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f60eeb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9ac/0x28da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628242 data_alloc: 218103808 data_used: 3612672
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 98787328 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9ac/0x28da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 98787328 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f582d680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4693548 data_alloc: 234881024 data_used: 15388672
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d2000/0x0/0x1bfc00000, data 0x26cca1e/0x28dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f52feb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909992218s of 10.046482086s, submitted: 28
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f5c023c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9bc/0x28db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 96845824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 96845824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f5e192c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 96821248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738982 data_alloc: 234881024 data_used: 15396864
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 96821248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788338 data_alloc: 234881024 data_used: 15515648
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f512c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bb7a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788498 data_alloc: 234881024 data_used: 15519744
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 94797824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f5872000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 94797824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.315635681s of 16.485715866s, submitted: 45
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449699840 unmapped: 89473024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f59163c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f33c30e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f3324000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f412d800 session 0x5616f5c21a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445997056 unmapped: 93175808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f509a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900087 data_alloc: 234881024 data_used: 20758528
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5c203c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f582dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f5415400 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdcb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f58bf4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f4d67e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5e29a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fcfb000/0x0/0x1bfc00000, data 0x3ba36df/0x3db3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f60ef4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f5c03860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900379 data_alloc: 234881024 data_used: 20787200
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f589a000 session 0x5616f2ec4d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7e48c00 session 0x5616f5605a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448356352 unmapped: 90816512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f582de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fcfb000/0x0/0x1bfc00000, data 0x3ba36df/0x3db3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f512de00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f3bd7680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f60ff400 session 0x5616f509cd20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448364544 unmapped: 90808320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f589a000 session 0x5616f3f683c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5de8000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447709184 unmapped: 91463680 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f3519400 session 0x5616f59170e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889461 data_alloc: 234881024 data_used: 24100864
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0200000/0x0/0x1bfc00000, data 0x369c455/0x38ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0200000/0x0/0x1bfc00000, data 0x369c455/0x38ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f60ff400 session 0x5616f5605860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.257311821s of 15.645789146s, submitted: 59
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f7095c00 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449060864 unmapped: 90112000 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4895835 data_alloc: 234881024 data_used: 24109056
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449060864 unmapped: 90112000 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5617029d1000 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a01fc000/0x0/0x1bfc00000, data 0x369e0c0/0x38b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [1,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449069056 unmapped: 90103808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f3519400 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f60ff400 session 0x5616f3bd65a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 89874432 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 89874432 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 87269376 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5064164 data_alloc: 234881024 data_used: 24522752
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ec51000/0x0/0x1bfc00000, data 0x4c4505e/0x4e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ebbc000/0x0/0x1bfc00000, data 0x4cd105e/0x4ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5074174 data_alloc: 234881024 data_used: 24330240
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 86188032 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.492916107s of 11.969421387s, submitted: 150
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 86188032 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f412c800 session 0x5616f5de9e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2effc00 session 0x5616f5c203c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ebaa000/0x0/0x1bfc00000, data 0x4cf205e/0x4f04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2effc00 session 0x5616f402b860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bcd680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f566a800 session 0x5616f582dc20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4945171 data_alloc: 234881024 data_used: 20971520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d663c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453001216 unmapped: 86171648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8c8000/0x0/0x1bfc00000, data 0x3fd504f/0x41e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453009408 unmapped: 86163456 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f3519400 session 0x5616f58beb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2effc00 session 0x5616f33552c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec3400 session 0x5616f50dd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616fd7af400 session 0x5616f5e18b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58721e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f3fb3400 session 0x5616f58be000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453427200 unmapped: 85745664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec3400 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 420 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616f2ec1000 session 0x5616f512c000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616f566a800 session 0x5616f5872000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616fd7af400 session 0x5616f5c203c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451584000 unmapped: 87588864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2effc00 session 0x5616f50dd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de81e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 87580672 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4988778 data_alloc: 234881024 data_used: 14598144
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 87572480 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 87572480 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f069000/0x0/0x1bfc00000, data 0x482c877/0x4a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454402048 unmapped: 84770816 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.975195885s of 12.467619896s, submitted: 140
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450756608 unmapped: 88416256 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f509c1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4964992 data_alloc: 234881024 data_used: 18284544
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4964992 data_alloc: 234881024 data_used: 18284544
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452878336 unmapped: 86294528 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f50d9e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f60ff400 session 0x5616f5e18960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f5829e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f60ef2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.915423393s of 10.204797745s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 59777024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f6ac1400 session 0x5616f5c02d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f0f5000/0x0/0x1bfc00000, data 0x47a2480/0x49b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bcc1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229959 data_alloc: 251658240 data_used: 37421056
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bd63c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f5c025a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 423 ms_handle_reset con 0x5616f2f37000 session 0x5616f50d8780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456826880 unmapped: 90750976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 423 heartbeat osd_stat(store_statfs(0x19f603000/0x0/0x1bfc00000, data 0x42911f7/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5053639 data_alloc: 234881024 data_used: 23670784
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 423 heartbeat osd_stat(store_statfs(0x19f603000/0x0/0x1bfc00000, data 0x42911f7/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029996872s of 11.434091568s, submitted: 84
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056437 data_alloc: 234881024 data_used: 23670784
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f601000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f602000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457539584 unmapped: 90038272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087533 data_alloc: 234881024 data_used: 26923008
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f602000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089773 data_alloc: 234881024 data_used: 26980352
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.539049149s of 11.555925369s, submitted: 21
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f60ffc00 session 0x5616f582da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f60f6000 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452968448 unmapped: 94609408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0871000/0x0/0x1bfc00000, data 0x3023d9e/0x323c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f59163c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f509c960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a086c000/0x0/0x1bfc00000, data 0x3025c31/0x3240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859937 data_alloc: 234881024 data_used: 13221888
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f4cdde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3c63c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f362f860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f509c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60ffc00 session 0x5616f2ec4d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560d2c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f509c780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f4cdde00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a02fa000/0x0/0x1bfc00000, data 0x3599c31/0x37b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918678 data_alloc: 234881024 data_used: 17874944
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a02fa000/0x0/0x1bfc00000, data 0x3599c31/0x37b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e183c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd63c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f5e18960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.991387367s of 10.391763687s, submitted: 83
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616ff64c800 session 0x5616f5c212c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50d9a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454344704 unmapped: 93233152 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d663c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f50dcf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456785920 unmapped: 90791936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5017348 data_alloc: 234881024 data_used: 23482368
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f3bcd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd1a000/0x0/0x1bfc00000, data 0x3b77ca3/0x3d94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c3c00 session 0x5616f5c02b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb7e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f50dd4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018102 data_alloc: 234881024 data_used: 23515136
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5018742 data_alloc: 234881024 data_used: 24084480
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.307758331s of 13.497234344s, submitted: 60
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,1,2,2,7])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f5e57800 session 0x5616f5de83c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 82550784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c0400 session 0x5616f5917c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412cc00 session 0x5616f5917680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5916b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 82190336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465477632 unmapped: 82100224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467640320 unmapped: 79937536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e7a0000/0x0/0x1bfc00000, data 0x4cdece6/0x4efe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5224004 data_alloc: 251658240 data_used: 30937088
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e7a0000/0x0/0x1bfc00000, data 0x4cdece6/0x4efe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467779584 unmapped: 79798272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225124 data_alloc: 251658240 data_used: 30965760
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.863697052s of 10.178115845s, submitted: 132
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 77692928 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 77692928 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e3da000/0x0/0x1bfc00000, data 0x50a4ce6/0x52c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c1e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60fa000 session 0x5616f5de90e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 80510976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c0400 session 0x5616f3c630e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 83681280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465829888 unmapped: 81747968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5095579 data_alloc: 234881024 data_used: 24866816
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 81002496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f274000/0x0/0x1bfc00000, data 0x420ace6/0x442a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111081 data_alloc: 234881024 data_used: 24174592
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f274000/0x0/0x1bfc00000, data 0x420ace6/0x442a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466599936 unmapped: 80977920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.665365219s of 12.100666046s, submitted: 147
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466206720 unmapped: 81371136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 81354752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 81354752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f270000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb74a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f5e57800 session 0x5616f5829c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108485 data_alloc: 234881024 data_used: 24137728
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f271000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58743c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f271000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466239488 unmapped: 81338368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100508 data_alloc: 234881024 data_used: 24023040
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466239488 unmapped: 81338368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f3289680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f560f680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58292c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466247680 unmapped: 81330176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466247680 unmapped: 81330176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.273255348s of 11.444032669s, submitted: 70
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466264064 unmapped: 81313792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df3860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466280448 unmapped: 81297408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5062775 data_alloc: 234881024 data_used: 24018944
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466280448 unmapped: 81297408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f58734a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466296832 unmapped: 81281024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f5e57800 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f566a800 session 0x5616f362f0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616fd7af400 session 0x5616f5605860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466305024 unmapped: 81272832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec5a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466305024 unmapped: 81272832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 heartbeat osd_stat(store_statfs(0x19f6ca000/0x0/0x1bfc00000, data 0x3db6a0a/0x3fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58743c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466337792 unmapped: 81240064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5065343 data_alloc: 234881024 data_used: 24027136
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f412c800 session 0x5616f3c630e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f5e57800 session 0x5616f5c212c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e18960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f412c800 session 0x5616f509c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df3e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a08f2000/0x0/0x1bfc00000, data 0x2b8ea1a/0x2dac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.148363113s of 10.002849579s, submitted: 144
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 427 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3f68b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858742 data_alloc: 234881024 data_used: 14594048
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a08eb000/0x0/0x1bfc00000, data 0x2b923b4/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a08eb000/0x0/0x1bfc00000, data 0x2b923b4/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861044 data_alloc: 234881024 data_used: 14594048
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e9000/0x0/0x1bfc00000, data 0x2b93fbd/0x2db4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f7095c00 session 0x5616f5e14d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e9000/0x0/0x1bfc00000, data 0x2b93fbd/0x2db4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.008526802s of 10.033182144s, submitted: 30
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5873a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463568896 unmapped: 84008960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463577088 unmapped: 84000768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861992 data_alloc: 234881024 data_used: 14594048
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463577088 unmapped: 84000768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5873c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616fd7af400 session 0x5616f5829a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f75c0400 session 0x5616f5604960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e6000/0x0/0x1bfc00000, data 0x2b93fcd/0x2db5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875568 data_alloc: 234881024 data_used: 15200256
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e6000/0x0/0x1bfc00000, data 0x2b93fcd/0x2db5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e4000/0x0/0x1bfc00000, data 0x2b98fcd/0x2dba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875126 data_alloc: 234881024 data_used: 15200256
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.116181374s of 13.762666702s, submitted: 33
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e3000/0x0/0x1bfc00000, data 0x2b99fcd/0x2dbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f33c25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5874f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875378 data_alloc: 234881024 data_used: 15200256
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f52feb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616fd7af400 session 0x5616f362f4a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4881025 data_alloc: 234881024 data_used: 15208448
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463609856 unmapped: 83968000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.698847771s of 11.737822533s, submitted: 9
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60f6000 session 0x5616f59165a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60fa000 session 0x5616f5873860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2705c20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885408 data_alloc: 234881024 data_used: 16318464
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08bf000/0x0/0x1bfc00000, data 0x2bbdfcd/0x2ddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e143c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f5dfda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456876032 unmapped: 90701824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457089024 unmapped: 90488832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457089024 unmapped: 90488832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747316 data_alloc: 218103808 data_used: 7409664
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4754578 data_alloc: 218103808 data_used: 7409664
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.422370911s of 14.626880646s, submitted: 63
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f9f27c00 session 0x5616f402a5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f63da800 session 0x5616f58bf860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1241000/0x0/0x1bfc00000, data 0x223ef4b/0x245d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f9f27c00 session 0x5616f3354780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509af00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3324000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f4d66d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de94a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750573 data_alloc: 218103808 data_used: 7696384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.722682953s of 38.938465118s, submitted: 21
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753789 data_alloc: 218103808 data_used: 7696384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2226f4b/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2226f4b/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4754049 data_alloc: 218103808 data_used: 7696384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753169 data_alloc: 218103808 data_used: 7696384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457072640 unmapped: 90505216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457072640 unmapped: 90505216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753169 data_alloc: 218103808 data_used: 7696384
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4758929 data_alloc: 218103808 data_used: 8683520
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60fa000 session 0x5616f4cdd860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.687757492s of 24.706781387s, submitted: 7
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df2000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f9f27c00 session 0x5616f5873860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762751 data_alloc: 218103808 data_used: 8691712
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f8393000 session 0x5616f33c23c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f5e59c00 session 0x5616f5874f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f8393000 session 0x5616f33c25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c7e/0x244a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c7e/0x244a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4787213 data_alloc: 218103808 data_used: 8691712
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5875e00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f60fa000 session 0x5616f5c212c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a10d4000/0x0/0x1bfc00000, data 0x23a9c7e/0x25ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0d51000/0x0/0x1bfc00000, data 0x272cc7e/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813800 data_alloc: 218103808 data_used: 8691712
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.882975578s of 14.058014870s, submitted: 44
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458219520 unmapped: 89358336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 432 ms_handle_reset con 0x5616fd7af400 session 0x5616f5605860
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458235904 unmapped: 89341952 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0d48000/0x0/0x1bfc00000, data 0x273072b/0x2954000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 433 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4905089 data_alloc: 218103808 data_used: 8699904
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616fd7af400 session 0x5616f3bb6000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f9f27c00 session 0x5616f512cb40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 heartbeat osd_stat(store_statfs(0x1a0398000/0x0/0x1bfc00000, data 0x30de20b/0x3305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f5e59c00 session 0x5616f5df25a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f8393000 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616fd7ae400 session 0x5616f4d66b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f8393000 session 0x5616f58bef00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462127104 unmapped: 85450752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f2ec1000 session 0x5616f27045a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f5dfda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f60fa000 session 0x5616f402a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006405 data_alloc: 218103808 data_used: 8712192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006405 data_alloc: 218103808 data_used: 8712192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.717531204s of 16.022281647s, submitted: 72
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,1])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014845 data_alloc: 218103808 data_used: 8712192
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f81f000/0x0/0x1bfc00000, data 0x3cc1f99/0x3e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458489856 unmapped: 89088000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 85745664 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5049196 data_alloc: 218103808 data_used: 8863744
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c03680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f5e28b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f8393000 session 0x5616f5c20960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461848576 unmapped: 85729280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.018222809s of 12.057978630s, submitted: 6
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616fd7ae400 session 0x5616f3bb7a40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88989696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5046046 data_alloc: 218103808 data_used: 8871936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 88170496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123326 data_alloc: 234881024 data_used: 19738624
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123326 data_alloc: 234881024 data_used: 19738624
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.779089928s of 12.794838905s, submitted: 2
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461488128 unmapped: 86089728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462577664 unmapped: 85000192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d938000/0x0/0x1bfc00000, data 0x509dfa9/0x4bc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5277298 data_alloc: 234881024 data_used: 20439040
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d92f000/0x0/0x1bfc00000, data 0x51b9fa9/0x4bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f9f26c00 session 0x5616f5e28000
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f40c7400 session 0x5616f509a780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5277298 data_alloc: 234881024 data_used: 20439040
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb6d20
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462594048 unmapped: 84983808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f6aa0800 session 0x5616f582cf00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f566b800 session 0x5616f4cdd0e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.805445671s of 10.007963181s, submitted: 88
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462610432 unmapped: 84967424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d954000/0x0/0x1bfc00000, data 0x5195f99/0x4baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462610432 unmapped: 84967424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462618624 unmapped: 84959232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f6aa0800 session 0x5616f5df2960
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f566b800 session 0x5616f5872780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3288780
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f75c0800 session 0x5616f582c5a0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462643200 unmapped: 84934656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4965343 data_alloc: 234881024 data_used: 15863808
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f40c7400 session 0x5616f5c21680
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f8393000 session 0x5616f5df2f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616fd7ae400 session 0x5616f5dfda40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 heartbeat osd_stat(store_statfs(0x19d957000/0x0/0x1bfc00000, data 0x5195f04/0x4ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5828b40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f566b800 session 0x5616f5c212c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a009e000/0x0/0x1bfc00000, data 0x2236ba4/0x245f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f2ec3400 session 0x5616f52fe3c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f63da800 session 0x5616f56052c0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831173 data_alloc: 218103808 data_used: 8921088
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e18f00
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959725380s of 12.410096169s, submitted: 147
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a009f000/0x0/0x1bfc00000, data 0x2236ba4/0x245f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4835347 data_alloc: 218103808 data_used: 8929280
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509da40
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f566b800 session 0x5616f58741e0
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}'
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}'
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458317824 unmapped: 89260032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 03:54:27 np0005539551 ceph-osd[78953]: do_command 'log dump' '{prefix=log dump}'
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1249990409' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 03:54:28 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1297414243' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:28 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3599179138' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:29 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 03:54:29 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/716488162' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 03:54:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:29 np0005539551 nova_compute[227360]: 2025-11-29 08:54:29.494 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:30 np0005539551 nova_compute[227360]: 2025-11-29 08:54:30.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2555323206' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/83381259' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3602846857' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 03:54:30 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/607720651' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4294749008' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/591463489' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 03:54:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:31.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/302676362' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/82846422' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 03:54:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:54:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:31 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4161390600' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263779412' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/274123403' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2496202389' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/63316731' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 03:54:32 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:32 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2182638551' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 03:54:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4035638359' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 03:54:33 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 03:54:33 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1551667369' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:33.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:34 np0005539551 nova_compute[227360]: 2025-11-29 08:54:34.497 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:34 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 03:54:34 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1659788378' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 03:54:35 np0005539551 nova_compute[227360]: 2025-11-29 08:54:35.046 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 03:54:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3841574943' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 03:54:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:35.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:35 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:35 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3123822911' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3668990219' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3867516453' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 03:54:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.9 total, 600.0 interval
Cumulative writes: 68K writes, 265K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
Cumulative WAL: 68K writes, 25K syncs, 2.65 writes per sync, written: 0.25 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5487 writes, 20K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 16.82 MB, 0.03 MB/s
Interval WAL: 5487 writes, 2358 syncs, 2.33 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read,
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 03:54:37 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/81187528' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 03:54:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1469009255' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1894489077' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 03:54:38 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1441903502' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 03:54:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:39 np0005539551 nova_compute[227360]: 2025-11-29 08:54:39.500 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 03:54:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/192616189' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 03:54:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:39.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:40 np0005539551 nova_compute[227360]: 2025-11-29 08:54:40.049 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3699077459' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 03:54:40 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2546063901' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 03:54:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:41.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 03:54:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1468098544' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 03:54:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 03:54:42 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2703370217' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 03:54:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3965022323' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 03:54:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4092442318' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 03:54:44 np0005539551 nova_compute[227360]: 2025-11-29 08:54:44.501 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:44 np0005539551 ovs-appctl[312321]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:44 np0005539551 ovs-appctl[312337]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:44 np0005539551 ovs-appctl[312360]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3342109832' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:45 np0005539551 nova_compute[227360]: 2025-11-29 08:54:45.050 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3472819389' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 03:54:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:45 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4198124404' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:46 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3170152758' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 03:54:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/821457907' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 03:54:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:47.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:47 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2986149224' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:48 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:48 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1136191876' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3199422408' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:49 np0005539551 nova_compute[227360]: 2025-11-29 08:54:49.503 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3577263347' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:49.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:50 np0005539551 nova_compute[227360]: 2025-11-29 08:54:50.051 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2945466966' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:51.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1979420826' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3323978376' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:53.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1755297146' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1098247854' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539551 nova_compute[227360]: 2025-11-29 08:54:54.505 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:54 np0005539551 podman[314071]: 2025-11-29 08:54:54.75359155 +0000 UTC m=+0.090787589 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:54:54 np0005539551 podman[314073]: 2025-11-29 08:54:54.756565251 +0000 UTC m=+0.090581194 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:54:54 np0005539551 podman[314072]: 2025-11-29 08:54:54.779923743 +0000 UTC m=+0.117083101 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:54:55 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 03:54:55 np0005539551 nova_compute[227360]: 2025-11-29 08:54:55.054 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1726562860' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/570437073' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539551 systemd[1]: Starting Time & Date Service...
Nov 29 03:54:55 np0005539551 systemd[1]: Started Time & Date Service.
Nov 29 03:54:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.695 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.696 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.696 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.696 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:54:58 np0005539551 nova_compute[227360]: 2025-11-29 08:54:58.696 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:54:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2914174965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.141 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.274 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.275 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4078MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.276 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.276 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.508 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:54:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.895 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.895 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:54:59 np0005539551 nova_compute[227360]: 2025-11-29 08:54:59.915 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.056 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2880138213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.354 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.360 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.379 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.381 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:55:00 np0005539551 nova_compute[227360]: 2025-11-29 08:55:00.381 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:01.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:02 np0005539551 nova_compute[227360]: 2025-11-29 08:55:02.382 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:02 np0005539551 nova_compute[227360]: 2025-11-29 08:55:02.382 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:55:02 np0005539551 nova_compute[227360]: 2025-11-29 08:55:02.382 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:55:02 np0005539551 nova_compute[227360]: 2025-11-29 08:55:02.455 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:55:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:04 np0005539551 nova_compute[227360]: 2025-11-29 08:55:04.412 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:04 np0005539551 nova_compute[227360]: 2025-11-29 08:55:04.413 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:04 np0005539551 nova_compute[227360]: 2025-11-29 08:55:04.512 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:05 np0005539551 nova_compute[227360]: 2025-11-29 08:55:05.058 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:05.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:07 np0005539551 nova_compute[227360]: 2025-11-29 08:55:07.407 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:09 np0005539551 nova_compute[227360]: 2025-11-29 08:55:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:09 np0005539551 nova_compute[227360]: 2025-11-29 08:55:09.516 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:55:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:55:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:09.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:10 np0005539551 nova_compute[227360]: 2025-11-29 08:55:10.059 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:55:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 14K writes, 76K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1642 writes, 8204 keys, 1642 commit groups, 1.0 writes per commit group, ingest: 17.00 MB, 0.03 MB/s#012Interval WAL: 1642 writes, 1642 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     17.6      5.40              0.32        48    0.113       0      0       0.0       0.0#012  L6      1/0   13.77 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.5     44.6     38.6     13.44              1.60        47    0.286    368K    25K       0.0       0.0#012 Sum      1/0   13.77 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.5     31.8     32.6     18.84              1.92        95    0.198    368K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    116.3    118.9      0.58              0.18        10    0.058     53K   2559       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     44.6     38.6     13.44              1.60        47    0.286    368K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     17.6      5.40              0.32        47    0.115       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.093, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.60 GB write, 0.10 MB/s write, 0.59 GB read, 0.10 MB/s read, 18.8 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 64.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000512 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3994,61.57 MB,20.254%) FilterBlock(95,1013.11 KB,0.325449%) IndexBlock(95,1.67 MB,0.55016%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:55:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:11.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:12 np0005539551 nova_compute[227360]: 2025-11-29 08:55:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:13.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:14 np0005539551 nova_compute[227360]: 2025-11-29 08:55:14.520 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:15 np0005539551 nova_compute[227360]: 2025-11-29 08:55:15.061 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:15.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:16 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:17 np0005539551 nova_compute[227360]: 2025-11-29 08:55:17.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:17 np0005539551 nova_compute[227360]: 2025-11-29 08:55:17.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:55:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:18 np0005539551 nova_compute[227360]: 2025-11-29 08:55:18.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:19 np0005539551 nova_compute[227360]: 2025-11-29 08:55:19.556 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:55:19.902 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:55:19.903 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:55:19.903 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:20 np0005539551 nova_compute[227360]: 2025-11-29 08:55:20.064 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:21 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:24 np0005539551 nova_compute[227360]: 2025-11-29 08:55:24.559 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:25 np0005539551 nova_compute[227360]: 2025-11-29 08:55:25.065 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:25.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:25 np0005539551 podman[314421]: 2025-11-29 08:55:25.60914403 +0000 UTC m=+0.059423930 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:55:25 np0005539551 podman[314420]: 2025-11-29 08:55:25.612947344 +0000 UTC m=+0.065618238 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:55:25 np0005539551 podman[314419]: 2025-11-29 08:55:25.634212099 +0000 UTC m=+0.087320915 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:55:25 np0005539551 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 03:55:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:25 np0005539551 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 03:55:26 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:29 np0005539551 nova_compute[227360]: 2025-11-29 08:55:29.561 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:30 np0005539551 nova_compute[227360]: 2025-11-29 08:55:30.104 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:31 np0005539551 nova_compute[227360]: 2025-11-29 08:55:31.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:31 np0005539551 nova_compute[227360]: 2025-11-29 08:55:31.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:55:31 np0005539551 nova_compute[227360]: 2025-11-29 08:55:31.436 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:55:31 np0005539551 nova_compute[227360]: 2025-11-29 08:55:31.436 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.762177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531762235, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1224, "num_deletes": 251, "total_data_size": 2217903, "memory_usage": 2253696, "flush_reason": "Manual Compaction"}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531771884, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1050502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75582, "largest_seqno": 76801, "table_properties": {"data_size": 1045003, "index_size": 2509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 17712, "raw_average_key_size": 23, "raw_value_size": 1032203, "raw_average_value_size": 1354, "num_data_blocks": 108, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406467, "oldest_key_time": 1764406467, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 9746 microseconds, and 5536 cpu microseconds.
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.771923) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1050502 bytes OK
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.771941) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773443) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773482) EVENT_LOG_v1 {"time_micros": 1764406531773476, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773500) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2211124, prev total WAL file size 2211124, number of live WAL files 2.
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.774196) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353035' seq:72057594037927935, type:22 .. '6D6772737461740032373537' seq:0, type:0; will stop at (end)
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1025KB)], [153(13MB)]
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531774216, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15484268, "oldest_snapshot_seqno": -1}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10976 keys, 12038597 bytes, temperature: kUnknown
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531838770, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 12038597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11970541, "index_size": 39527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 289656, "raw_average_key_size": 26, "raw_value_size": 11780863, "raw_average_value_size": 1073, "num_data_blocks": 1496, "num_entries": 10976, "num_filter_entries": 10976, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.839096) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 12038597 bytes
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.840399) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.3 rd, 186.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(26.2) write-amplify(11.5) OK, records in: 11464, records dropped: 488 output_compression: NoCompression
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.840419) EVENT_LOG_v1 {"time_micros": 1764406531840410, "job": 98, "event": "compaction_finished", "compaction_time_micros": 64713, "compaction_time_cpu_micros": 28092, "output_level": 6, "num_output_files": 1, "total_output_size": 12038597, "num_input_records": 11464, "num_output_records": 10976, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531840878, "job": 98, "event": "table_file_deletion", "file_number": 155}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531843944, "job": 98, "event": "table_file_deletion", "file_number": 153}
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.774151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.844060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.844065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.844066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.844069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:55:31.844070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:32 np0005539551 nova_compute[227360]: 2025-11-29 08:55:32.428 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:32 np0005539551 nova_compute[227360]: 2025-11-29 08:55:32.428 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:55:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:34 np0005539551 nova_compute[227360]: 2025-11-29 08:55:34.621 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:35 np0005539551 nova_compute[227360]: 2025-11-29 08:55:35.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:36 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:39 np0005539551 systemd[1]: session-57.scope: Deactivated successfully.
Nov 29 03:55:39 np0005539551 systemd[1]: session-57.scope: Consumed 2min 40.175s CPU time, 1.0G memory peak, read 466.1M from disk, written 328.0M to disk.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Session 57 logged out. Waiting for processes to exit.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Removed session 57.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: New session 58 of user zuul.
Nov 29 03:55:39 np0005539551 systemd[1]: Started Session 58 of User zuul.
Nov 29 03:55:39 np0005539551 systemd[1]: session-58.scope: Deactivated successfully.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Session 58 logged out. Waiting for processes to exit.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Removed session 58.
Nov 29 03:55:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:39 np0005539551 systemd-logind[788]: New session 59 of user zuul.
Nov 29 03:55:39 np0005539551 systemd[1]: Started Session 59 of User zuul.
Nov 29 03:55:39 np0005539551 nova_compute[227360]: 2025-11-29 08:55:39.623 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:39 np0005539551 systemd[1]: session-59.scope: Deactivated successfully.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Session 59 logged out. Waiting for processes to exit.
Nov 29 03:55:39 np0005539551 systemd-logind[788]: Removed session 59.
Nov 29 03:55:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:39.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:40 np0005539551 nova_compute[227360]: 2025-11-29 08:55:40.108 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:41.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:43.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:44 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:44 np0005539551 nova_compute[227360]: 2025-11-29 08:55:44.660 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:45 np0005539551 nova_compute[227360]: 2025-11-29 08:55:45.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:45.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:46 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:47.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:49.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:49 np0005539551 nova_compute[227360]: 2025-11-29 08:55:49.664 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:49.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:50 np0005539551 nova_compute[227360]: 2025-11-29 08:55:50.110 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:51.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:52 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:53.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:53.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:54 np0005539551 nova_compute[227360]: 2025-11-29 08:55:54.667 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:55 np0005539551 nova_compute[227360]: 2025-11-29 08:55:55.111 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:55 np0005539551 nova_compute[227360]: 2025-11-29 08:55:55.496 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:55.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:55.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:56 np0005539551 podman[314724]: 2025-11-29 08:55:56.624991989 +0000 UTC m=+0.068029862 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:55:56 np0005539551 podman[314725]: 2025-11-29 08:55:56.64238899 +0000 UTC m=+0.079406231 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:55:56 np0005539551 podman[314723]: 2025-11-29 08:55:56.670190373 +0000 UTC m=+0.108492349 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:55:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:57.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:57.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:59.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539551 nova_compute[227360]: 2025-11-29 08:55:59.669 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:55:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:59.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.114 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.452 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.452 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.452 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.452 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.453 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1240879343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:00 np0005539551 nova_compute[227360]: 2025-11-29 08:56:00.926 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.068 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.069 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4214MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.069 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.069 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.163 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.164 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.194 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1005671389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.640 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.650 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:56:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.982708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561982823, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 255, "total_data_size": 833236, "memory_usage": 843976, "flush_reason": "Manual Compaction"}
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.983 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.987 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:56:01 np0005539551 nova_compute[227360]: 2025-11-29 08:56:01.987 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561990263, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 550047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76806, "largest_seqno": 77357, "table_properties": {"data_size": 547097, "index_size": 921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6826, "raw_average_key_size": 18, "raw_value_size": 541200, "raw_average_value_size": 1478, "num_data_blocks": 40, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406532, "oldest_key_time": 1764406532, "file_creation_time": 1764406561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 7585 microseconds, and 3846 cpu microseconds.
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.990308) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 550047 bytes OK
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.990328) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.992407) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.992421) EVENT_LOG_v1 {"time_micros": 1764406561992415, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.992438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 830001, prev total WAL file size 830001, number of live WAL files 2.
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.992965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373635' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(537KB)], [156(11MB)]
Nov 29 03:56:01 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561993039, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12588644, "oldest_snapshot_seqno": -1}
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10820 keys, 12447519 bytes, temperature: kUnknown
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562065948, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 12447519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12379544, "index_size": 39830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 287361, "raw_average_key_size": 26, "raw_value_size": 12191664, "raw_average_value_size": 1126, "num_data_blocks": 1506, "num_entries": 10820, "num_filter_entries": 10820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.066241) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 12447519 bytes
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.067837) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 170.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(45.5) write-amplify(22.6) OK, records in: 11342, records dropped: 522 output_compression: NoCompression
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.067858) EVENT_LOG_v1 {"time_micros": 1764406562067849, "job": 100, "event": "compaction_finished", "compaction_time_micros": 72982, "compaction_time_cpu_micros": 35129, "output_level": 6, "num_output_files": 1, "total_output_size": 12447519, "num_input_records": 11342, "num_output_records": 10820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562068254, "job": 100, "event": "table_file_deletion", "file_number": 158}
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562071057, "job": 100, "event": "table_file_deletion", "file_number": 156}
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:01.992857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.071126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.071134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.071137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.071140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:56:02.071143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:03.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:03.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:03 np0005539551 nova_compute[227360]: 2025-11-29 08:56:03.988 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:03 np0005539551 nova_compute[227360]: 2025-11-29 08:56:03.989 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:56:03 np0005539551 nova_compute[227360]: 2025-11-29 08:56:03.989 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:56:04 np0005539551 nova_compute[227360]: 2025-11-29 08:56:04.009 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:56:04 np0005539551 nova_compute[227360]: 2025-11-29 08:56:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:04 np0005539551 nova_compute[227360]: 2025-11-29 08:56:04.672 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:05 np0005539551 nova_compute[227360]: 2025-11-29 08:56:05.115 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:06 np0005539551 nova_compute[227360]: 2025-11-29 08:56:06.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:07.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:07.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:08 np0005539551 nova_compute[227360]: 2025-11-29 08:56:08.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:09 np0005539551 nova_compute[227360]: 2025-11-29 08:56:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:09.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:09 np0005539551 nova_compute[227360]: 2025-11-29 08:56:09.676 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:10 np0005539551 nova_compute[227360]: 2025-11-29 08:56:10.117 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:11.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:11.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:13.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:14 np0005539551 nova_compute[227360]: 2025-11-29 08:56:14.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:14 np0005539551 nova_compute[227360]: 2025-11-29 08:56:14.680 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:15 np0005539551 nova_compute[227360]: 2025-11-29 08:56:15.120 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:17.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:18.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:18 np0005539551 nova_compute[227360]: 2025-11-29 08:56:18.733 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:19 np0005539551 nova_compute[227360]: 2025-11-29 08:56:19.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:19 np0005539551 nova_compute[227360]: 2025-11-29 08:56:19.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:56:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:19.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539551 nova_compute[227360]: 2025-11-29 08:56:19.682 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:56:19.903 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:56:19.903 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:56:19.904 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:20.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:20 np0005539551 nova_compute[227360]: 2025-11-29 08:56:20.162 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:20 np0005539551 nova_compute[227360]: 2025-11-29 08:56:20.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:23.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:24 np0005539551 nova_compute[227360]: 2025-11-29 08:56:24.687 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:25 np0005539551 nova_compute[227360]: 2025-11-29 08:56:25.164 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:27 np0005539551 podman[314830]: 2025-11-29 08:56:27.605332735 +0000 UTC m=+0.062486712 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:56:27 np0005539551 podman[314831]: 2025-11-29 08:56:27.640105756 +0000 UTC m=+0.091369354 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:56:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539551 podman[314829]: 2025-11-29 08:56:27.67236759 +0000 UTC m=+0.128750297 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:56:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:28.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:29.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:29 np0005539551 nova_compute[227360]: 2025-11-29 08:56:29.692 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:30.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:30 np0005539551 nova_compute[227360]: 2025-11-29 08:56:30.211 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:34 np0005539551 nova_compute[227360]: 2025-11-29 08:56:34.732 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:35 np0005539551 nova_compute[227360]: 2025-11-29 08:56:35.212 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:39.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:39 np0005539551 nova_compute[227360]: 2025-11-29 08:56:39.735 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:40 np0005539551 nova_compute[227360]: 2025-11-29 08:56:40.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:42.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:43.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:44 np0005539551 nova_compute[227360]: 2025-11-29 08:56:44.737 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:45 np0005539551 nova_compute[227360]: 2025-11-29 08:56:45.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:45.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539551 nova_compute[227360]: 2025-11-29 08:56:49.741 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:50.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:50 np0005539551 nova_compute[227360]: 2025-11-29 08:56:50.249 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:52 np0005539551 podman[315067]: 2025-11-29 08:56:52.677723513 +0000 UTC m=+0.340930782 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:56:52 np0005539551 podman[315067]: 2025-11-29 08:56:52.792317016 +0000 UTC m=+0.455524285 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:56:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:54.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539551 nova_compute[227360]: 2025-11-29 08:56:54.744 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:55 np0005539551 nova_compute[227360]: 2025-11-29 08:56:55.291 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:55 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:56.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:58.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:58 np0005539551 podman[315324]: 2025-11-29 08:56:58.619320485 +0000 UTC m=+0.067275342 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 03:56:58 np0005539551 podman[315325]: 2025-11-29 08:56:58.625185514 +0000 UTC m=+0.075628979 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:56:58 np0005539551 podman[315323]: 2025-11-29 08:56:58.642053951 +0000 UTC m=+0.096458263 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:56:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:56:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:59.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539551 nova_compute[227360]: 2025-11-29 08:56:59.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:00.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:00 np0005539551 nova_compute[227360]: 2025-11-29 08:57:00.325 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:02.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.661 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.662 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.695 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.696 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.696 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.696 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:57:02 np0005539551 nova_compute[227360]: 2025-11-29 08:57:02.697 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:03 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:03 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1910601173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.134 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.299 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.300 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4185MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.300 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.301 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.414 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.414 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.460 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.532 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.533 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.550 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.567 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:57:03 np0005539551 nova_compute[227360]: 2025-11-29 08:57:03.582 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:03.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/48833951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.064 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:04.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.073 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.240 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.243 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.244 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:04 np0005539551 nova_compute[227360]: 2025-11-29 08:57:04.749 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:05 np0005539551 nova_compute[227360]: 2025-11-29 08:57:05.347 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:05.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:06.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:07 np0005539551 nova_compute[227360]: 2025-11-29 08:57:07.993 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:07 np0005539551 nova_compute[227360]: 2025-11-29 08:57:07.993 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:08 np0005539551 nova_compute[227360]: 2025-11-29 08:57:08.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:09.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:09 np0005539551 nova_compute[227360]: 2025-11-29 08:57:09.753 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:10 np0005539551 nova_compute[227360]: 2025-11-29 08:57:10.350 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:10 np0005539551 nova_compute[227360]: 2025-11-29 08:57:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:11.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:12.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:14.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:14 np0005539551 nova_compute[227360]: 2025-11-29 08:57:14.756 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:15 np0005539551 nova_compute[227360]: 2025-11-29 08:57:15.388 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:15 np0005539551 nova_compute[227360]: 2025-11-29 08:57:15.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:15.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:17.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:19.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:19 np0005539551 nova_compute[227360]: 2025-11-29 08:57:19.760 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:57:19.904 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:57:19.904 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:57:19.904 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:20.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:20 np0005539551 nova_compute[227360]: 2025-11-29 08:57:20.391 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:20 np0005539551 nova_compute[227360]: 2025-11-29 08:57:20.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:20 np0005539551 nova_compute[227360]: 2025-11-29 08:57:20.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:57:21 np0005539551 nova_compute[227360]: 2025-11-29 08:57:21.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:23.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:24.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:24 np0005539551 nova_compute[227360]: 2025-11-29 08:57:24.762 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:25 np0005539551 nova_compute[227360]: 2025-11-29 08:57:25.414 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:25.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:27.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539551 podman[315484]: 2025-11-29 08:57:29.64195569 +0000 UTC m=+0.072883834 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:57:29 np0005539551 podman[315483]: 2025-11-29 08:57:29.649369761 +0000 UTC m=+0.085132946 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:57:29 np0005539551 podman[315482]: 2025-11-29 08:57:29.684677477 +0000 UTC m=+0.126235759 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:57:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539551 nova_compute[227360]: 2025-11-29 08:57:29.764 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:30 np0005539551 nova_compute[227360]: 2025-11-29 08:57:30.416 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:31.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:32.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:33.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:34 np0005539551 nova_compute[227360]: 2025-11-29 08:57:34.768 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:35 np0005539551 nova_compute[227360]: 2025-11-29 08:57:35.465 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:57:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1431184385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:57:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:57:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1431184385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:57:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:39 np0005539551 nova_compute[227360]: 2025-11-29 08:57:39.771 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:40.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:40 np0005539551 nova_compute[227360]: 2025-11-29 08:57:40.466 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:44 np0005539551 nova_compute[227360]: 2025-11-29 08:57:44.774 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:45 np0005539551 nova_compute[227360]: 2025-11-29 08:57:45.468 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:46.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:47.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:49 np0005539551 nova_compute[227360]: 2025-11-29 08:57:49.776 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:49.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:50.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539551 nova_compute[227360]: 2025-11-29 08:57:50.470 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:51.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:52.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:54.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539551 nova_compute[227360]: 2025-11-29 08:57:54.780 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:55 np0005539551 nova_compute[227360]: 2025-11-29 08:57:55.472 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:55.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:56.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:57.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:58.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:59 np0005539551 nova_compute[227360]: 2025-11-29 08:57:59.783 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:57:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:59.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:00.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:00 np0005539551 nova_compute[227360]: 2025-11-29 08:58:00.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:00 np0005539551 nova_compute[227360]: 2025-11-29 08:58:00.474 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:00 np0005539551 podman[315543]: 2025-11-29 08:58:00.639198464 +0000 UTC m=+0.067168899 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.651227) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680651262, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1370, "num_deletes": 251, "total_data_size": 3109092, "memory_usage": 3136512, "flush_reason": "Manual Compaction"}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Nov 29 03:58:00 np0005539551 podman[315542]: 2025-11-29 08:58:00.658061985 +0000 UTC m=+0.081630421 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680664841, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 2040821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77363, "largest_seqno": 78727, "table_properties": {"data_size": 2034972, "index_size": 3179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12429, "raw_average_key_size": 19, "raw_value_size": 2023247, "raw_average_value_size": 3242, "num_data_blocks": 141, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406562, "oldest_key_time": 1764406562, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 13651 microseconds, and 4723 cpu microseconds.
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.664878) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 2040821 bytes OK
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.664895) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.666847) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.666861) EVENT_LOG_v1 {"time_micros": 1764406680666856, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.666879) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3102707, prev total WAL file size 3102707, number of live WAL files 2.
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.667715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(1992KB)], [159(11MB)]
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680667776, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 14488340, "oldest_snapshot_seqno": -1}
Nov 29 03:58:00 np0005539551 podman[315541]: 2025-11-29 08:58:00.680215615 +0000 UTC m=+0.111493709 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10927 keys, 12556564 bytes, temperature: kUnknown
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680726515, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12556564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12488002, "index_size": 40161, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 290220, "raw_average_key_size": 26, "raw_value_size": 12298212, "raw_average_value_size": 1125, "num_data_blocks": 1515, "num_entries": 10927, "num_filter_entries": 10927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.726778) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12556564 bytes
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.727960) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.4 rd, 213.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.2) OK, records in: 11444, records dropped: 517 output_compression: NoCompression
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.727976) EVENT_LOG_v1 {"time_micros": 1764406680727968, "job": 102, "event": "compaction_finished", "compaction_time_micros": 58809, "compaction_time_cpu_micros": 29772, "output_level": 6, "num_output_files": 1, "total_output_size": 12556564, "num_input_records": 11444, "num_output_records": 10927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680728405, "job": 102, "event": "table_file_deletion", "file_number": 161}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680730611, "job": 102, "event": "table_file_deletion", "file_number": 159}
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.667600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.730682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.730688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.730691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.730693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-08:58:00.730695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:01.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:02.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:02 np0005539551 nova_compute[227360]: 2025-11-29 08:58:02.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:02 np0005539551 nova_compute[227360]: 2025-11-29 08:58:02.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:58:02 np0005539551 nova_compute[227360]: 2025-11-29 08:58:02.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:58:02 np0005539551 nova_compute[227360]: 2025-11-29 08:58:02.426 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:58:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:03.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:04 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.433 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.434 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.434 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.435 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.784 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/314018690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:04 np0005539551 nova_compute[227360]: 2025-11-29 08:58:04.894 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.039 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.041 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4198MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.041 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.041 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.285 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.285 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.445 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.476 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/202258666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.871 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.877 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.908 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.910 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:58:05 np0005539551 nova_compute[227360]: 2025-11-29 08:58:05.911 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539551 nova_compute[227360]: 2025-11-29 08:58:07.911 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:09 np0005539551 nova_compute[227360]: 2025-11-29 08:58:09.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:09 np0005539551 nova_compute[227360]: 2025-11-29 08:58:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:09 np0005539551 nova_compute[227360]: 2025-11-29 08:58:09.787 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:09.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:10.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:10 np0005539551 nova_compute[227360]: 2025-11-29 08:58:10.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:10 np0005539551 nova_compute[227360]: 2025-11-29 08:58:10.477 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:11.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:14 np0005539551 nova_compute[227360]: 2025-11-29 08:58:14.790 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:15 np0005539551 nova_compute[227360]: 2025-11-29 08:58:15.479 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:15.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:17 np0005539551 nova_compute[227360]: 2025-11-29 08:58:17.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:19 np0005539551 nova_compute[227360]: 2025-11-29 08:58:19.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:19.906 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:19.906 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:19.906 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:20 np0005539551 nova_compute[227360]: 2025-11-29 08:58:20.483 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:22.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:22 np0005539551 nova_compute[227360]: 2025-11-29 08:58:22.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:22 np0005539551 nova_compute[227360]: 2025-11-29 08:58:22.412 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:58:23 np0005539551 nova_compute[227360]: 2025-11-29 08:58:23.412 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:23.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:24 np0005539551 nova_compute[227360]: 2025-11-29 08:58:24.796 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:25 np0005539551 nova_compute[227360]: 2025-11-29 08:58:25.486 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:25.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:27.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:29 np0005539551 nova_compute[227360]: 2025-11-29 08:58:29.800 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:29.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:30 np0005539551 nova_compute[227360]: 2025-11-29 08:58:30.487 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:31 np0005539551 podman[315830]: 2025-11-29 08:58:31.608242589 +0000 UTC m=+0.062240996 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:58:31 np0005539551 podman[315831]: 2025-11-29 08:58:31.608808104 +0000 UTC m=+0.057126647 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:58:31 np0005539551 podman[315829]: 2025-11-29 08:58:31.632068664 +0000 UTC m=+0.088222690 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:58:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:31.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:33.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:34 np0005539551 nova_compute[227360]: 2025-11-29 08:58:34.802 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:35 np0005539551 nova_compute[227360]: 2025-11-29 08:58:35.490 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:35.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:37.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:58:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1068567165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:58:39 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:58:39 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1068567165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:58:39 np0005539551 nova_compute[227360]: 2025-11-29 08:58:39.806 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:39.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:40 np0005539551 nova_compute[227360]: 2025-11-29 08:58:40.492 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:44 np0005539551 nova_compute[227360]: 2025-11-29 08:58:44.809 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:45 np0005539551 nova_compute[227360]: 2025-11-29 08:58:45.494 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:45.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:47.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:48.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:49 np0005539551 nova_compute[227360]: 2025-11-29 08:58:49.810 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:50.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:50 np0005539551 nova_compute[227360]: 2025-11-29 08:58:50.497 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:51 np0005539551 nova_compute[227360]: 2025-11-29 08:58:51.535 227364 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:51 np0005539551 nova_compute[227360]: 2025-11-29 08:58:51.582 227364 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:52.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:54.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:54 np0005539551 nova_compute[227360]: 2025-11-29 08:58:54.813 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:55 np0005539551 nova_compute[227360]: 2025-11-29 08:58:55.499 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:56.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:56.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:56.450 139482 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:58:56 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:56.451 139482 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:58:56 np0005539551 nova_compute[227360]: 2025-11-29 08:58:56.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:58:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:58 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:58:58.452 139482 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a37d8697-3fee-4a55-8dd5-3894cb7e8e1c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:59 np0005539551 nova_compute[227360]: 2025-11-29 08:58:59.816 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:00 np0005539551 nova_compute[227360]: 2025-11-29 08:59:00.500 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:02 np0005539551 podman[315892]: 2025-11-29 08:59:02.629224815 +0000 UTC m=+0.075252199 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:59:02 np0005539551 podman[315893]: 2025-11-29 08:59:02.640245833 +0000 UTC m=+0.070272233 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:59:02 np0005539551 podman[315891]: 2025-11-29 08:59:02.657134971 +0000 UTC m=+0.099564856 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:59:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:04 np0005539551 nova_compute[227360]: 2025-11-29 08:59:04.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:04 np0005539551 nova_compute[227360]: 2025-11-29 08:59:04.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:59:04 np0005539551 nova_compute[227360]: 2025-11-29 08:59:04.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:59:04 np0005539551 nova_compute[227360]: 2025-11-29 08:59:04.431 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:59:04 np0005539551 nova_compute[227360]: 2025-11-29 08:59:04.818 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:05 np0005539551 nova_compute[227360]: 2025-11-29 08:59:05.517 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:06.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:06.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:06 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:06 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2130178875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:06 np0005539551 nova_compute[227360]: 2025-11-29 08:59:06.849 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.006 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.007 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4194MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.007 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.007 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.112 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.112 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.201 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4231614786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.732 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.739 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.761 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.763 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:59:07 np0005539551 nova_compute[227360]: 2025-11-29 08:59:07.763 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:08.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:09 np0005539551 nova_compute[227360]: 2025-11-29 08:59:09.820 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:10.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:10 np0005539551 nova_compute[227360]: 2025-11-29 08:59:10.518 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:10 np0005539551 nova_compute[227360]: 2025-11-29 08:59:10.764 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:11 np0005539551 nova_compute[227360]: 2025-11-29 08:59:11.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:11 np0005539551 nova_compute[227360]: 2025-11-29 08:59:11.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:13 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:14.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:14 np0005539551 nova_compute[227360]: 2025-11-29 08:59:14.822 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:15 np0005539551 nova_compute[227360]: 2025-11-29 08:59:15.522 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:16.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:18.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:18 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:18 np0005539551 nova_compute[227360]: 2025-11-29 08:59:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:19 np0005539551 nova_compute[227360]: 2025-11-29 08:59:19.826 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:59:19.906 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:59:19.907 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 08:59:19.907 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:20 np0005539551 nova_compute[227360]: 2025-11-29 08:59:20.523 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:22.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:23 np0005539551 nova_compute[227360]: 2025-11-29 08:59:23.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:23 np0005539551 nova_compute[227360]: 2025-11-29 08:59:23.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:59:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:24.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:24 np0005539551 nova_compute[227360]: 2025-11-29 08:59:24.828 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:25 np0005539551 nova_compute[227360]: 2025-11-29 08:59:25.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:25 np0005539551 nova_compute[227360]: 2025-11-29 08:59:25.525 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:26.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:28.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:29 np0005539551 nova_compute[227360]: 2025-11-29 08:59:29.831 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539551 nova_compute[227360]: 2025-11-29 08:59:30.533 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539551 podman[316301]: 2025-11-29 08:59:33.621473592 +0000 UTC m=+0.060568690 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:59:33 np0005539551 podman[316300]: 2025-11-29 08:59:33.666384368 +0000 UTC m=+0.103120812 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:59:33 np0005539551 podman[316299]: 2025-11-29 08:59:33.701467218 +0000 UTC m=+0.141509012 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:59:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:34.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:34 np0005539551 nova_compute[227360]: 2025-11-29 08:59:34.833 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:35 np0005539551 nova_compute[227360]: 2025-11-29 08:59:35.535 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:36.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:38.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:38.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:39 np0005539551 nova_compute[227360]: 2025-11-29 08:59:39.835 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:40.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:40 np0005539551 nova_compute[227360]: 2025-11-29 08:59:40.537 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:44.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:44 np0005539551 nova_compute[227360]: 2025-11-29 08:59:44.837 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:45 np0005539551 nova_compute[227360]: 2025-11-29 08:59:45.539 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:48.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:49 np0005539551 nova_compute[227360]: 2025-11-29 08:59:49.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:50.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:50 np0005539551 nova_compute[227360]: 2025-11-29 08:59:50.541 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:54 np0005539551 nova_compute[227360]: 2025-11-29 08:59:54.843 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:55 np0005539551 nova_compute[227360]: 2025-11-29 08:59:55.544 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 03:59:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:59 np0005539551 nova_compute[227360]: 2025-11-29 08:59:59.845 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539551 ceph-mon[81672]: overall HEALTH_OK
Nov 29 04:00:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539551 nova_compute[227360]: 2025-11-29 09:00:00.546 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:03 np0005539551 nova_compute[227360]: 2025-11-29 09:00:03.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:04.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:04 np0005539551 podman[316361]: 2025-11-29 09:00:04.628089801 +0000 UTC m=+0.077608243 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:04 np0005539551 podman[316362]: 2025-11-29 09:00:04.644219977 +0000 UTC m=+0.090410149 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:00:04 np0005539551 podman[316363]: 2025-11-29 09:00:04.646373465 +0000 UTC m=+0.092459874 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:00:04 np0005539551 nova_compute[227360]: 2025-11-29 09:00:04.881 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:05 np0005539551 nova_compute[227360]: 2025-11-29 09:00:05.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:05 np0005539551 nova_compute[227360]: 2025-11-29 09:00:05.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:00:05 np0005539551 nova_compute[227360]: 2025-11-29 09:00:05.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:00:05 np0005539551 nova_compute[227360]: 2025-11-29 09:00:05.517 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:00:05 np0005539551 nova_compute[227360]: 2025-11-29 09:00:05.548 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:06.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:06 np0005539551 nova_compute[227360]: 2025-11-29 09:00:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.458 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.459 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.459 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.459 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.459 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:07 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/887616016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:07 np0005539551 nova_compute[227360]: 2025-11-29 09:00:07.866 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.026 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.027 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4198MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.027 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.028 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.095 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.096 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.109 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:08.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/232227709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.520 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.527 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.548 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.550 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:00:08 np0005539551 nova_compute[227360]: 2025-11-29 09:00:08.550 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:09 np0005539551 nova_compute[227360]: 2025-11-29 09:00:09.884 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:10.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539551 nova_compute[227360]: 2025-11-29 09:00:10.551 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:11 np0005539551 nova_compute[227360]: 2025-11-29 09:00:11.552 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539551 nova_compute[227360]: 2025-11-29 09:00:12.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:13 np0005539551 nova_compute[227360]: 2025-11-29 09:00:13.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:14 np0005539551 nova_compute[227360]: 2025-11-29 09:00:14.887 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:15 np0005539551 nova_compute[227360]: 2025-11-29 09:00:15.553 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:18.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:18 np0005539551 nova_compute[227360]: 2025-11-29 09:00:18.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:19 np0005539551 nova_compute[227360]: 2025-11-29 09:00:19.888 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:00:19.907 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:00:19.907 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:00:19.907 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:00:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:00:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:20.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:20 np0005539551 nova_compute[227360]: 2025-11-29 09:00:20.555 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:22.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:24.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:24.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:24 np0005539551 nova_compute[227360]: 2025-11-29 09:00:24.891 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:25 np0005539551 nova_compute[227360]: 2025-11-29 09:00:25.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:25 np0005539551 nova_compute[227360]: 2025-11-29 09:00:25.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:00:25 np0005539551 nova_compute[227360]: 2025-11-29 09:00:25.557 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:26.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:26 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:26.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:26 np0005539551 nova_compute[227360]: 2025-11-29 09:00:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:28.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:28.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:29 np0005539551 nova_compute[227360]: 2025-11-29 09:00:29.894 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:30.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:30.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:30 np0005539551 nova_compute[227360]: 2025-11-29 09:00:30.560 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:31 np0005539551 nova_compute[227360]: 2025-11-29 09:00:31.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:32.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:34.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:34.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:34 np0005539551 nova_compute[227360]: 2025-11-29 09:00:34.897 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:35 np0005539551 nova_compute[227360]: 2025-11-29 09:00:35.564 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:35 np0005539551 podman[316655]: 2025-11-29 09:00:35.603209726 +0000 UTC m=+0.053445088 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:00:35 np0005539551 podman[316654]: 2025-11-29 09:00:35.624109611 +0000 UTC m=+0.074922708 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:35 np0005539551 podman[316653]: 2025-11-29 09:00:35.652605373 +0000 UTC m=+0.108933650 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 04:00:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:36.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:37 np0005539551 nova_compute[227360]: 2025-11-29 09:00:37.462 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:37 np0005539551 nova_compute[227360]: 2025-11-29 09:00:37.462 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:00:37 np0005539551 nova_compute[227360]: 2025-11-29 09:00:37.497 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:00:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:38.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:39 np0005539551 nova_compute[227360]: 2025-11-29 09:00:39.940 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:40.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:40 np0005539551 nova_compute[227360]: 2025-11-29 09:00:40.566 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:42.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:42 np0005539551 nova_compute[227360]: 2025-11-29 09:00:42.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:42 np0005539551 nova_compute[227360]: 2025-11-29 09:00:42.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:00:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:44.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:44.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:44 np0005539551 nova_compute[227360]: 2025-11-29 09:00:44.942 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539551 nova_compute[227360]: 2025-11-29 09:00:45.568 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:46.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:46.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:48.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:48.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:49 np0005539551 nova_compute[227360]: 2025-11-29 09:00:49.944 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:50.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:50 np0005539551 nova_compute[227360]: 2025-11-29 09:00:50.569 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:52.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:54.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:54 np0005539551 nova_compute[227360]: 2025-11-29 09:00:54.947 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:55 np0005539551 nova_compute[227360]: 2025-11-29 09:00:55.573 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:56.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:58.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:00:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:59 np0005539551 nova_compute[227360]: 2025-11-29 09:00:59.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:00.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:00.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:00 np0005539551 nova_compute[227360]: 2025-11-29 09:01:00.574 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:02.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:04.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:04 np0005539551 nova_compute[227360]: 2025-11-29 09:01:04.952 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:05 np0005539551 nova_compute[227360]: 2025-11-29 09:01:05.431 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:05 np0005539551 nova_compute[227360]: 2025-11-29 09:01:05.431 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:01:05 np0005539551 nova_compute[227360]: 2025-11-29 09:01:05.432 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:01:05 np0005539551 nova_compute[227360]: 2025-11-29 09:01:05.451 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:01:05 np0005539551 nova_compute[227360]: 2025-11-29 09:01:05.575 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:06 np0005539551 nova_compute[227360]: 2025-11-29 09:01:06.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:06 np0005539551 podman[316728]: 2025-11-29 09:01:06.619535738 +0000 UTC m=+0.061805594 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:01:06 np0005539551 podman[316734]: 2025-11-29 09:01:06.674037853 +0000 UTC m=+0.110508272 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 04:01:06 np0005539551 podman[316727]: 2025-11-29 09:01:06.695136725 +0000 UTC m=+0.144577726 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:01:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:06.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:08.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.445 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.446 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.446 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.446 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.447 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:08.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:08 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:08 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1971028993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:08 np0005539551 nova_compute[227360]: 2025-11-29 09:01:08.871 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.019 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.020 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4190MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.020 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.020 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.146 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.146 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.164 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1458591606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.580 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.586 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.622 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.624 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.624 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:09 np0005539551 nova_compute[227360]: 2025-11-29 09:01:09.954 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:10 np0005539551 nova_compute[227360]: 2025-11-29 09:01:10.578 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:10.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:12.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:13 np0005539551 nova_compute[227360]: 2025-11-29 09:01:13.624 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:13 np0005539551 nova_compute[227360]: 2025-11-29 09:01:13.625 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:14.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:14.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:14 np0005539551 nova_compute[227360]: 2025-11-29 09:01:14.957 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:15 np0005539551 nova_compute[227360]: 2025-11-29 09:01:15.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:15 np0005539551 nova_compute[227360]: 2025-11-29 09:01:15.581 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:16.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:16.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:18.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:19 np0005539551 nova_compute[227360]: 2025-11-29 09:01:19.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:01:19.908 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:01:19.908 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:01:19.908 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:19 np0005539551 nova_compute[227360]: 2025-11-29 09:01:19.960 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:20 np0005539551 nova_compute[227360]: 2025-11-29 09:01:20.582 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:22.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:24.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:24.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:24 np0005539551 nova_compute[227360]: 2025-11-29 09:01:24.963 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:25 np0005539551 nova_compute[227360]: 2025-11-29 09:01:25.635 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:26 np0005539551 nova_compute[227360]: 2025-11-29 09:01:26.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:26 np0005539551 nova_compute[227360]: 2025-11-29 09:01:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:26 np0005539551 nova_compute[227360]: 2025-11-29 09:01:26.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:01:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:26.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:28.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:01:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:29 np0005539551 nova_compute[227360]: 2025-11-29 09:01:29.965 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:30 np0005539551 nova_compute[227360]: 2025-11-29 09:01:30.638 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:30.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:34.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:34 np0005539551 nova_compute[227360]: 2025-11-29 09:01:34.968 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:35 np0005539551 nova_compute[227360]: 2025-11-29 09:01:35.683 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.976771) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406896976932, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 5884774, "memory_usage": 5959824, "flush_reason": "Manual Compaction"}
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406896995435, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 3828938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78732, "largest_seqno": 81077, "table_properties": {"data_size": 3819374, "index_size": 6057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19260, "raw_average_key_size": 20, "raw_value_size": 3800423, "raw_average_value_size": 4004, "num_data_blocks": 265, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406681, "oldest_key_time": 1764406681, "file_creation_time": 1764406896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 18630 microseconds, and 7474 cpu microseconds.
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.995474) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 3828938 bytes OK
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.995491) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.997450) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.997461) EVENT_LOG_v1 {"time_micros": 1764406896997458, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.997476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 5874493, prev total WAL file size 5874493, number of live WAL files 2.
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.998659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(3739KB)], [162(11MB)]
Nov 29 04:01:36 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406896998725, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 16385502, "oldest_snapshot_seqno": -1}
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 11357 keys, 14379861 bytes, temperature: kUnknown
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897153385, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14379861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14306958, "index_size": 43414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 299802, "raw_average_key_size": 26, "raw_value_size": 14108192, "raw_average_value_size": 1242, "num_data_blocks": 1650, "num_entries": 11357, "num_filter_entries": 11357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764406896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.153659) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14379861 bytes
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.156187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.9 rd, 92.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11876, records dropped: 519 output_compression: NoCompression
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.156211) EVENT_LOG_v1 {"time_micros": 1764406897156200, "job": 104, "event": "compaction_finished", "compaction_time_micros": 154731, "compaction_time_cpu_micros": 63551, "output_level": 6, "num_output_files": 1, "total_output_size": 14379861, "num_input_records": 11876, "num_output_records": 11357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897157935, "job": 104, "event": "table_file_deletion", "file_number": 164}
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897162236, "job": 104, "event": "table_file_deletion", "file_number": 162}
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:36.998499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.162277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.162282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.162285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.162328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:01:37.162330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539551 podman[317013]: 2025-11-29 09:01:37.258155794 +0000 UTC m=+0.056816430 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:01:37 np0005539551 podman[317012]: 2025-11-29 09:01:37.267318251 +0000 UTC m=+0.067996431 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 04:01:37 np0005539551 podman[317011]: 2025-11-29 09:01:37.300416478 +0000 UTC m=+0.100482412 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 04:01:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:39 np0005539551 nova_compute[227360]: 2025-11-29 09:01:39.970 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:40 np0005539551 nova_compute[227360]: 2025-11-29 09:01:40.686 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:42.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:42.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:44.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:44.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:44 np0005539551 nova_compute[227360]: 2025-11-29 09:01:44.972 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:45 np0005539551 nova_compute[227360]: 2025-11-29 09:01:45.688 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:46.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:48.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:48.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:49 np0005539551 nova_compute[227360]: 2025-11-29 09:01:49.975 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:50.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:50 np0005539551 nova_compute[227360]: 2025-11-29 09:01:50.690 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:54.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:54 np0005539551 nova_compute[227360]: 2025-11-29 09:01:54.977 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:55 np0005539551 nova_compute[227360]: 2025-11-29 09:01:55.696 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:56.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:01:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:58.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:59 np0005539551 nova_compute[227360]: 2025-11-29 09:01:59.980 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:00.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539551 nova_compute[227360]: 2025-11-29 09:02:00.697 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:02.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:02.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:04.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:04.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539551 nova_compute[227360]: 2025-11-29 09:02:04.982 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:05 np0005539551 nova_compute[227360]: 2025-11-29 09:02:05.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:05 np0005539551 nova_compute[227360]: 2025-11-29 09:02:05.746 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:06.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:07 np0005539551 nova_compute[227360]: 2025-11-29 09:02:07.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:07 np0005539551 nova_compute[227360]: 2025-11-29 09:02:07.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:02:07 np0005539551 nova_compute[227360]: 2025-11-29 09:02:07.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:02:07 np0005539551 nova_compute[227360]: 2025-11-29 09:02:07.433 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:02:07 np0005539551 nova_compute[227360]: 2025-11-29 09:02:07.434 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:02:07 np0005539551 podman[317144]: 2025-11-29 09:02:07.603140869 +0000 UTC m=+0.051238559 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:02:07 np0005539551 podman[317143]: 2025-11-29 09:02:07.603883029 +0000 UTC m=+0.057618521 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:02:07 np0005539551 podman[317142]: 2025-11-29 09:02:07.626019289 +0000 UTC m=+0.081446227 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 04:02:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:08.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 04:02:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:08.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.439 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.440 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.440 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3682842047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.888 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:09 np0005539551 nova_compute[227360]: 2025-11-29 09:02:09.984 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.026 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.027 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4207MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.028 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.028 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.120 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.120 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.136 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.151 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.151 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.171 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.193 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.208 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2839695857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.639 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.646 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.671 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.673 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.673 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:10 np0005539551 nova_compute[227360]: 2025-11-29 09:02:10.793 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:14.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539551 nova_compute[227360]: 2025-11-29 09:02:14.986 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:15 np0005539551 nova_compute[227360]: 2025-11-29 09:02:15.674 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:15 np0005539551 nova_compute[227360]: 2025-11-29 09:02:15.674 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:15 np0005539551 nova_compute[227360]: 2025-11-29 09:02:15.794 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:16.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:17 np0005539551 nova_compute[227360]: 2025-11-29 09:02:17.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:02:19.908 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:02:19.908 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:02:19.909 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:19 np0005539551 nova_compute[227360]: 2025-11-29 09:02:19.988 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:20.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:20 np0005539551 nova_compute[227360]: 2025-11-29 09:02:20.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:20 np0005539551 nova_compute[227360]: 2025-11-29 09:02:20.795 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:22.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:24 np0005539551 nova_compute[227360]: 2025-11-29 09:02:24.990 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:25 np0005539551 nova_compute[227360]: 2025-11-29 09:02:25.836 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:26 np0005539551 nova_compute[227360]: 2025-11-29 09:02:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:26 np0005539551 nova_compute[227360]: 2025-11-29 09:02:26.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:02:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:26.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:28 np0005539551 nova_compute[227360]: 2025-11-29 09:02:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:28.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:29 np0005539551 nova_compute[227360]: 2025-11-29 09:02:29.994 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:30 np0005539551 nova_compute[227360]: 2025-11-29 09:02:30.838 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:34.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:34 np0005539551 nova_compute[227360]: 2025-11-29 09:02:34.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:35 np0005539551 nova_compute[227360]: 2025-11-29 09:02:35.840 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:37 np0005539551 podman[317388]: 2025-11-29 09:02:37.7522021 +0000 UTC m=+0.077107439 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 04:02:37 np0005539551 podman[317390]: 2025-11-29 09:02:37.753234878 +0000 UTC m=+0.075571358 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:02:37 np0005539551 podman[317391]: 2025-11-29 09:02:37.77546695 +0000 UTC m=+0.082454823 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 04:02:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:38.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:38.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:40 np0005539551 nova_compute[227360]: 2025-11-29 09:02:40.000 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:40 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:40 np0005539551 nova_compute[227360]: 2025-11-29 09:02:40.843 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:40.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:42.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:45 np0005539551 nova_compute[227360]: 2025-11-29 09:02:45.001 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:45 np0005539551 nova_compute[227360]: 2025-11-29 09:02:45.884 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:46.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:48.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:50 np0005539551 nova_compute[227360]: 2025-11-29 09:02:50.005 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:50 np0005539551 nova_compute[227360]: 2025-11-29 09:02:50.884 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:50.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:52.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:54.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:55 np0005539551 nova_compute[227360]: 2025-11-29 09:02:55.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:55 np0005539551 nova_compute[227360]: 2025-11-29 09:02:55.940 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:56.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:56.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:58.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:02:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539551 nova_compute[227360]: 2025-11-29 09:03:00.009 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:00.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539551 nova_compute[227360]: 2025-11-29 09:03:00.942 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:00.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:03.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:04.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:05.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:05 np0005539551 nova_compute[227360]: 2025-11-29 09:03:05.012 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:05 np0005539551 nova_compute[227360]: 2025-11-29 09:03:05.946 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:08 np0005539551 podman[317580]: 2025-11-29 09:03:08.652490119 +0000 UTC m=+0.098132818 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:03:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:08.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:08 np0005539551 podman[317581]: 2025-11-29 09:03:08.667066894 +0000 UTC m=+0.104045368 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:03:08 np0005539551 podman[317582]: 2025-11-29 09:03:08.673123308 +0000 UTC m=+0.105676312 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:03:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:09.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.413 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.413 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.413 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.433 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.434 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.434 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.454 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.455 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.455 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.455 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.456 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/810973871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:09 np0005539551 nova_compute[227360]: 2025-11-29 09:03:09.881 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.014 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.019 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.020 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4214MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.020 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.021 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.188 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.189 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.321 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:10.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:10 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:10 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2021506521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.746 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.753 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.775 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.777 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.777 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:10 np0005539551 nova_compute[227360]: 2025-11-29 09:03:10.950 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:11.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:12.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:13.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:14.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:15 np0005539551 nova_compute[227360]: 2025-11-29 09:03:15.016 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:15.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:15 np0005539551 nova_compute[227360]: 2025-11-29 09:03:15.753 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:15 np0005539551 nova_compute[227360]: 2025-11-29 09:03:15.951 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:16 np0005539551 nova_compute[227360]: 2025-11-29 09:03:16.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:16.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:17.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:18.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:19 np0005539551 nova_compute[227360]: 2025-11-29 09:03:19.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:03:19.909 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:03:19.909 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:03:19.909 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:20 np0005539551 nova_compute[227360]: 2025-11-29 09:03:20.018 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:20 np0005539551 nova_compute[227360]: 2025-11-29 09:03:20.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539551 nova_compute[227360]: 2025-11-29 09:03:20.953 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:21.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:23.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:25 np0005539551 nova_compute[227360]: 2025-11-29 09:03:25.021 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:25 np0005539551 nova_compute[227360]: 2025-11-29 09:03:25.997 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:26 np0005539551 nova_compute[227360]: 2025-11-29 09:03:26.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:26 np0005539551 nova_compute[227360]: 2025-11-29 09:03:26.411 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:03:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:27.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539551 nova_compute[227360]: 2025-11-29 09:03:30.023 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:30 np0005539551 nova_compute[227360]: 2025-11-29 09:03:30.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:31 np0005539551 nova_compute[227360]: 2025-11-29 09:03:31.000 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:31.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.120105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012120186, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1303, "num_deletes": 259, "total_data_size": 2948454, "memory_usage": 2988960, "flush_reason": "Manual Compaction"}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012137573, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1946407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81082, "largest_seqno": 82380, "table_properties": {"data_size": 1940819, "index_size": 2982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11788, "raw_average_key_size": 19, "raw_value_size": 1929555, "raw_average_value_size": 3178, "num_data_blocks": 133, "num_entries": 607, "num_filter_entries": 607, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406898, "oldest_key_time": 1764406898, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 17516 microseconds, and 5643 cpu microseconds.
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.137629) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1946407 bytes OK
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.137651) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.139280) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.139319) EVENT_LOG_v1 {"time_micros": 1764407012139313, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.139338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2942302, prev total WAL file size 2942302, number of live WAL files 2.
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.140441) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323730' seq:0, type:0; will stop at (end)
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1900KB)], [165(13MB)]
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012140498, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 16326268, "oldest_snapshot_seqno": -1}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 11433 keys, 16185076 bytes, temperature: kUnknown
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012228767, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 16185076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16109573, "index_size": 45822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 302309, "raw_average_key_size": 26, "raw_value_size": 15907376, "raw_average_value_size": 1391, "num_data_blocks": 1751, "num_entries": 11433, "num_filter_entries": 11433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.230517) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 16185076 bytes
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.232903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 183.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.7 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(16.7) write-amplify(8.3) OK, records in: 11964, records dropped: 531 output_compression: NoCompression
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.232924) EVENT_LOG_v1 {"time_micros": 1764407012232915, "job": 106, "event": "compaction_finished", "compaction_time_micros": 88311, "compaction_time_cpu_micros": 35272, "output_level": 6, "num_output_files": 1, "total_output_size": 16185076, "num_input_records": 11964, "num_output_records": 11433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012233396, "job": 106, "event": "table_file_deletion", "file_number": 167}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012235750, "job": 106, "event": "table_file_deletion", "file_number": 165}
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.140337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.235830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.235835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.235837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.235838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:32.235840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:35 np0005539551 nova_compute[227360]: 2025-11-29 09:03:35.027 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:36 np0005539551 nova_compute[227360]: 2025-11-29 09:03:36.002 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:36.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:38 np0005539551 podman[317726]: 2025-11-29 09:03:38.790371549 +0000 UTC m=+0.055343209 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:03:38 np0005539551 podman[317727]: 2025-11-29 09:03:38.80735535 +0000 UTC m=+0.066247225 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:38 np0005539551 podman[317725]: 2025-11-29 09:03:38.868118244 +0000 UTC m=+0.132405015 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:03:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539551 nova_compute[227360]: 2025-11-29 09:03:40.029 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:41 np0005539551 nova_compute[227360]: 2025-11-29 09:03:41.004 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:45 np0005539551 nova_compute[227360]: 2025-11-29 09:03:45.030 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:46 np0005539551 nova_compute[227360]: 2025-11-29 09:03:46.006 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:46.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:49.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:50 np0005539551 nova_compute[227360]: 2025-11-29 09:03:50.031 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:51 np0005539551 nova_compute[227360]: 2025-11-29 09:03:51.008 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:03:51 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.018177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032018240, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 451, "num_deletes": 250, "total_data_size": 523098, "memory_usage": 530992, "flush_reason": "Manual Compaction"}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032024005, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 283158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82385, "largest_seqno": 82831, "table_properties": {"data_size": 280773, "index_size": 484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6618, "raw_average_key_size": 20, "raw_value_size": 275863, "raw_average_value_size": 854, "num_data_blocks": 22, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407012, "oldest_key_time": 1764407012, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 5884 microseconds, and 2540 cpu microseconds.
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.024069) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 283158 bytes OK
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.024095) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025549) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025564) EVENT_LOG_v1 {"time_micros": 1764407032025559, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 520303, prev total WAL file size 520303, number of live WAL files 2.
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.026046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373536' seq:72057594037927935, type:22 .. '6D6772737461740033303037' seq:0, type:0; will stop at (end)
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(276KB)], [168(15MB)]
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032026085, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 16468234, "oldest_snapshot_seqno": -1}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 11249 keys, 12695110 bytes, temperature: kUnknown
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032119860, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 12695110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12625480, "index_size": 40397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28165, "raw_key_size": 298687, "raw_average_key_size": 26, "raw_value_size": 12431102, "raw_average_value_size": 1105, "num_data_blocks": 1525, "num_entries": 11249, "num_filter_entries": 11249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.120328) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 12695110 bytes
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.122963) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 135.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.4 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(103.0) write-amplify(44.8) OK, records in: 11756, records dropped: 507 output_compression: NoCompression
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.123029) EVENT_LOG_v1 {"time_micros": 1764407032123005, "job": 108, "event": "compaction_finished", "compaction_time_micros": 93914, "compaction_time_cpu_micros": 42311, "output_level": 6, "num_output_files": 1, "total_output_size": 12695110, "num_input_records": 11756, "num_output_records": 11249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032123476, "job": 108, "event": "table_file_deletion", "file_number": 170}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032128219, "job": 108, "event": "table_file_deletion", "file_number": 168}
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.026003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.128407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.128419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.128423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.128427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:03:52.128430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:53.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:55 np0005539551 nova_compute[227360]: 2025-11-29 09:03:55.033 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:56 np0005539551 nova_compute[227360]: 2025-11-29 09:03:56.010 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:03:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:00 np0005539551 nova_compute[227360]: 2025-11-29 09:04:00.036 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:01 np0005539551 nova_compute[227360]: 2025-11-29 09:04:01.013 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:04:02 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:04:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:05 np0005539551 nova_compute[227360]: 2025-11-29 09:04:05.038 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:05 np0005539551 nova_compute[227360]: 2025-11-29 09:04:05.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:06 np0005539551 nova_compute[227360]: 2025-11-29 09:04:06.054 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:09 np0005539551 podman[318008]: 2025-11-29 09:04:09.173479777 +0000 UTC m=+0.055971056 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:04:09 np0005539551 podman[318009]: 2025-11-29 09:04:09.197426666 +0000 UTC m=+0.075612568 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 04:04:09 np0005539551 podman[318007]: 2025-11-29 09:04:09.229975777 +0000 UTC m=+0.114826670 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.412 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.447 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.448 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.449 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.449 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.449 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:09 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:09 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1834381154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:09 np0005539551 nova_compute[227360]: 2025-11-29 09:04:09.887 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.040 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.104 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.106 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.106 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.106 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.644 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.645 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:04:10 np0005539551 nova_compute[227360]: 2025-11-29 09:04:10.670 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.055 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2904342085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.086 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.093 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:04:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.111 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.113 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:04:11 np0005539551 nova_compute[227360]: 2025-11-29 09:04:11.114 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:13 np0005539551 nova_compute[227360]: 2025-11-29 09:04:13.113 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:13 np0005539551 nova_compute[227360]: 2025-11-29 09:04:13.113 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:04:13 np0005539551 nova_compute[227360]: 2025-11-29 09:04:13.114 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:04:13 np0005539551 nova_compute[227360]: 2025-11-29 09:04:13.344 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:04:14 np0005539551 nova_compute[227360]: 2025-11-29 09:04:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:15 np0005539551 nova_compute[227360]: 2025-11-29 09:04:15.043 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:16 np0005539551 nova_compute[227360]: 2025-11-29 09:04:16.058 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:16 np0005539551 nova_compute[227360]: 2025-11-29 09:04:16.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:04:19.909 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:04:19.910 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:04:19.910 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:20 np0005539551 nova_compute[227360]: 2025-11-29 09:04:20.056 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:20 np0005539551 nova_compute[227360]: 2025-11-29 09:04:20.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:21 np0005539551 nova_compute[227360]: 2025-11-29 09:04:21.061 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:22 np0005539551 nova_compute[227360]: 2025-11-29 09:04:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:22.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:24.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:25 np0005539551 nova_compute[227360]: 2025-11-29 09:04:25.060 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:26 np0005539551 nova_compute[227360]: 2025-11-29 09:04:26.062 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:28 np0005539551 nova_compute[227360]: 2025-11-29 09:04:28.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:28 np0005539551 nova_compute[227360]: 2025-11-29 09:04:28.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:04:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:28.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539551 nova_compute[227360]: 2025-11-29 09:04:30.062 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:31 np0005539551 nova_compute[227360]: 2025-11-29 09:04:31.063 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:31.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:32 np0005539551 nova_compute[227360]: 2025-11-29 09:04:32.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:35 np0005539551 nova_compute[227360]: 2025-11-29 09:04:35.066 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:36 np0005539551 nova_compute[227360]: 2025-11-29 09:04:36.066 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:04:37 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.9 total, 600.0 interval#012Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 68K writes, 26K syncs, 2.64 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 508 writes, 775 keys, 508 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 508 writes, 252 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:04:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:38.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:39 np0005539551 podman[318139]: 2025-11-29 09:04:39.599862528 +0000 UTC m=+0.045004070 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:04:39 np0005539551 podman[318138]: 2025-11-29 09:04:39.606443386 +0000 UTC m=+0.055167945 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:04:39 np0005539551 podman[318137]: 2025-11-29 09:04:39.672204697 +0000 UTC m=+0.123111735 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 04:04:40 np0005539551 nova_compute[227360]: 2025-11-29 09:04:40.067 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:41 np0005539551 nova_compute[227360]: 2025-11-29 09:04:41.069 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:41.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:45 np0005539551 nova_compute[227360]: 2025-11-29 09:04:45.070 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:45.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:46 np0005539551 nova_compute[227360]: 2025-11-29 09:04:46.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:46.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:49.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:50 np0005539551 nova_compute[227360]: 2025-11-29 09:04:50.072 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:51 np0005539551 nova_compute[227360]: 2025-11-29 09:04:51.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:53.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:54.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:55 np0005539551 nova_compute[227360]: 2025-11-29 09:04:55.076 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:55.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:56 np0005539551 nova_compute[227360]: 2025-11-29 09:04:56.075 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:57.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:04:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:00 np0005539551 nova_compute[227360]: 2025-11-29 09:05:00.079 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:00.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:01 np0005539551 nova_compute[227360]: 2025-11-29 09:05:01.077 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:03 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:05 np0005539551 nova_compute[227360]: 2025-11-29 09:05:05.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:06 np0005539551 nova_compute[227360]: 2025-11-29 09:05:06.078 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:09 np0005539551 nova_compute[227360]: 2025-11-29 09:05:09.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:10 np0005539551 podman[318425]: 2025-11-29 09:05:10.043895306 +0000 UTC m=+0.058352261 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:05:10 np0005539551 podman[318424]: 2025-11-29 09:05:10.074415002 +0000 UTC m=+0.097558692 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:10 np0005539551 nova_compute[227360]: 2025-11-29 09:05:10.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:10 np0005539551 podman[318423]: 2025-11-29 09:05:10.093067597 +0000 UTC m=+0.113903625 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:05:10 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1455 writes, 7193 keys, 1455 commit groups, 1.0 writes per commit group, ingest: 15.10 MB, 0.03 MB/s#012Interval WAL: 1455 writes, 1455 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     19.0      5.48              0.35        54    0.101       0      0       0.0       0.0#012  L6      1/0   12.11 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.7     49.2     42.6     13.97              1.84        53    0.264    438K    28K       0.0       0.0#012 Sum      1/0   12.11 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.7     35.3     36.0     19.45              2.19       107    0.182    438K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.3    144.3    141.5      0.61              0.26        12    0.051     69K   3084       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     49.2     42.6     13.97              1.84        53    0.264    438K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     19.1      5.47              0.35        53    0.103       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.102, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.68 GB write, 0.11 MB/s write, 0.67 GB read, 0.10 MB/s read, 19.5 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557021ed51f0#2 capacity: 304.00 MB usage: 70.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000846 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4341,67.33 MB,22.1491%) FilterBlock(107,1.19 MB,0.390419%) IndexBlock(107,1.94 MB,0.637918%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:05:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:10.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.080 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.427 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.428 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.428 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.429 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.429 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:11 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:11 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/403914837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:11 np0005539551 nova_compute[227360]: 2025-11-29 09:05:11.922 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.076 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.077 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4213MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.077 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.078 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.168 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.168 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.190 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2277966156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.603 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.608 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.655 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.657 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:05:12 np0005539551 nova_compute[227360]: 2025-11-29 09:05:12.657 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:14 np0005539551 nova_compute[227360]: 2025-11-29 09:05:14.658 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:14 np0005539551 nova_compute[227360]: 2025-11-29 09:05:14.658 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:05:14 np0005539551 nova_compute[227360]: 2025-11-29 09:05:14.659 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:05:14 np0005539551 nova_compute[227360]: 2025-11-29 09:05:14.675 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:05:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.410 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.411 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.412 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.413 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.441 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.441 227364 WARNING nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.441 227364 WARNING nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.442 227364 WARNING nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.442 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Removable base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.443 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.443 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.443 227364 INFO nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b734e4b236330b7f0c98a124ac144d2673a30e56#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.443 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.444 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 04:05:15 np0005539551 nova_compute[227360]: 2025-11-29 09:05:15.444 227364 DEBUG nova.virt.libvirt.imagecache [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 04:05:16 np0005539551 nova_compute[227360]: 2025-11-29 09:05:16.082 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:16 np0005539551 nova_compute[227360]: 2025-11-29 09:05:16.444 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:18.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.742950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119742991, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1092, "num_deletes": 251, "total_data_size": 2317975, "memory_usage": 2339520, "flush_reason": "Manual Compaction"}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119753469, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 1518411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82836, "largest_seqno": 83923, "table_properties": {"data_size": 1513656, "index_size": 2342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10532, "raw_average_key_size": 19, "raw_value_size": 1503997, "raw_average_value_size": 2811, "num_data_blocks": 104, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407032, "oldest_key_time": 1764407032, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 10576 microseconds, and 4604 cpu microseconds.
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.753521) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 1518411 bytes OK
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.753544) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.754781) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.754796) EVENT_LOG_v1 {"time_micros": 1764407119754790, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.754814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2312660, prev total WAL file size 2312660, number of live WAL files 2.
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.755636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(1482KB)], [171(12MB)]
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119755709, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 14213521, "oldest_snapshot_seqno": -1}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 11269 keys, 12138918 bytes, temperature: kUnknown
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119829562, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 12138918, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12069970, "index_size": 39651, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299757, "raw_average_key_size": 26, "raw_value_size": 11876006, "raw_average_value_size": 1053, "num_data_blocks": 1485, "num_entries": 11269, "num_filter_entries": 11269, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.830174) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12138918 bytes
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.831798) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 164.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(17.4) write-amplify(8.0) OK, records in: 11784, records dropped: 515 output_compression: NoCompression
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.831831) EVENT_LOG_v1 {"time_micros": 1764407119831818, "job": 110, "event": "compaction_finished", "compaction_time_micros": 73982, "compaction_time_cpu_micros": 29177, "output_level": 6, "num_output_files": 1, "total_output_size": 12138918, "num_input_records": 11784, "num_output_records": 11269, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119832176, "job": 110, "event": "table_file_deletion", "file_number": 173}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119834294, "job": 110, "event": "table_file_deletion", "file_number": 171}
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.755474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.834432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.834438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.834439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.834441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:05:19.834442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:05:19.911 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:05:19.912 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:05:19.912 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:20 np0005539551 nova_compute[227360]: 2025-11-29 09:05:20.090 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:20 np0005539551 nova_compute[227360]: 2025-11-29 09:05:20.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:21 np0005539551 nova_compute[227360]: 2025-11-29 09:05:21.084 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:21 np0005539551 ceph-mgr[82034]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 04:05:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:22 np0005539551 nova_compute[227360]: 2025-11-29 09:05:22.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:22.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:23.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:25 np0005539551 nova_compute[227360]: 2025-11-29 09:05:25.092 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:25.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:26 np0005539551 nova_compute[227360]: 2025-11-29 09:05:26.086 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:29.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:29 np0005539551 nova_compute[227360]: 2025-11-29 09:05:29.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:29 np0005539551 nova_compute[227360]: 2025-11-29 09:05:29.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:05:30 np0005539551 nova_compute[227360]: 2025-11-29 09:05:30.094 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:31 np0005539551 nova_compute[227360]: 2025-11-29 09:05:31.128 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:31.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:33 np0005539551 nova_compute[227360]: 2025-11-29 09:05:33.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:34.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:35 np0005539551 nova_compute[227360]: 2025-11-29 09:05:35.096 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:36 np0005539551 nova_compute[227360]: 2025-11-29 09:05:36.160 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:37.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:38.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:39.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:40 np0005539551 nova_compute[227360]: 2025-11-29 09:05:40.098 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:40 np0005539551 nova_compute[227360]: 2025-11-29 09:05:40.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:40 np0005539551 nova_compute[227360]: 2025-11-29 09:05:40.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:05:40 np0005539551 nova_compute[227360]: 2025-11-29 09:05:40.427 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:05:40 np0005539551 podman[318574]: 2025-11-29 09:05:40.589923163 +0000 UTC m=+0.045352318 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:05:40 np0005539551 podman[318573]: 2025-11-29 09:05:40.598505536 +0000 UTC m=+0.057231052 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 04:05:40 np0005539551 podman[318572]: 2025-11-29 09:05:40.621231261 +0000 UTC m=+0.082177686 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:05:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:40.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:41 np0005539551 nova_compute[227360]: 2025-11-29 09:05:41.202 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:41 np0005539551 nova_compute[227360]: 2025-11-29 09:05:41.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:45 np0005539551 nova_compute[227360]: 2025-11-29 09:05:45.100 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:46 np0005539551 nova_compute[227360]: 2025-11-29 09:05:46.203 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:46 np0005539551 nova_compute[227360]: 2025-11-29 09:05:46.430 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:46 np0005539551 nova_compute[227360]: 2025-11-29 09:05:46.430 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:05:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:49.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:50 np0005539551 nova_compute[227360]: 2025-11-29 09:05:50.103 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:51 np0005539551 nova_compute[227360]: 2025-11-29 09:05:51.205 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:51.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:52.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:53.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:54 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:54 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:54 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:55 np0005539551 nova_compute[227360]: 2025-11-29 09:05:55.106 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:56 np0005539551 nova_compute[227360]: 2025-11-29 09:05:56.206 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:56 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:56 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:56 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:58 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:58 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:05:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:00 np0005539551 nova_compute[227360]: 2025-11-29 09:06:00.111 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:00 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:00 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:00 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:01 np0005539551 nova_compute[227360]: 2025-11-29 09:06:01.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:02 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:02 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:02 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:04 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:04 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:04 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:05 np0005539551 nova_compute[227360]: 2025-11-29 09:06:05.153 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:06 np0005539551 nova_compute[227360]: 2025-11-29 09:06:06.260 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:06 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:06 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:06 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:08 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:08 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:08 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:09 np0005539551 nova_compute[227360]: 2025-11-29 09:06:09.425 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:10 np0005539551 nova_compute[227360]: 2025-11-29 09:06:10.156 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:10 np0005539551 nova_compute[227360]: 2025-11-29 09:06:10.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:10 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:10 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:10 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:11 np0005539551 nova_compute[227360]: 2025-11-29 09:06:11.263 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:11 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:11 np0005539551 podman[318805]: 2025-11-29 09:06:11.625351823 +0000 UTC m=+0.076728508 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:06:11 np0005539551 podman[318804]: 2025-11-29 09:06:11.640137244 +0000 UTC m=+0.091350154 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:06:11 np0005539551 podman[318803]: 2025-11-29 09:06:11.681213646 +0000 UTC m=+0.126644600 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 04:06:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.435 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.436 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:12 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4044233353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:12 np0005539551 nova_compute[227360]: 2025-11-29 09:06:12.859 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:12 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:12 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:12 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.007 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.008 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4196MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.008 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.008 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.070 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.070 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.097 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:13 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:13 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3713049411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.531 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.536 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.550 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.554 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:06:13 np0005539551 nova_compute[227360]: 2025-11-29 09:06:13.554 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:14 np0005539551 nova_compute[227360]: 2025-11-29 09:06:14.555 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:14 np0005539551 nova_compute[227360]: 2025-11-29 09:06:14.556 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:06:14 np0005539551 nova_compute[227360]: 2025-11-29 09:06:14.556 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:06:14 np0005539551 nova_compute[227360]: 2025-11-29 09:06:14.576 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:06:14 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:14 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:14 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:15 np0005539551 nova_compute[227360]: 2025-11-29 09:06:15.194 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:15.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:15 np0005539551 nova_compute[227360]: 2025-11-29 09:06:15.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:16 np0005539551 nova_compute[227360]: 2025-11-29 09:06:16.302 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:16 np0005539551 nova_compute[227360]: 2025-11-29 09:06:16.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:16 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:16 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:16 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:18 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:18 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:18 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:19.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:06:19.913 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:06:19.913 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:06:19.913 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539551 nova_compute[227360]: 2025-11-29 09:06:20.196 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:20 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:20 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:20 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:20.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:21 np0005539551 nova_compute[227360]: 2025-11-29 09:06:21.302 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:06:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:06:21 np0005539551 nova_compute[227360]: 2025-11-29 09:06:21.404 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:22 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:22 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:22 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:22.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:23 np0005539551 nova_compute[227360]: 2025-11-29 09:06:23.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:24 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:24 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:25 np0005539551 nova_compute[227360]: 2025-11-29 09:06:25.198 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:25.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:26 np0005539551 nova_compute[227360]: 2025-11-29 09:06:26.304 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:26 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:26 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:26 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:28 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:28 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:28.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:29.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:29 np0005539551 nova_compute[227360]: 2025-11-29 09:06:29.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:29 np0005539551 nova_compute[227360]: 2025-11-29 09:06:29.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:06:30 np0005539551 nova_compute[227360]: 2025-11-29 09:06:30.200 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:30 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:30 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:30 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:31 np0005539551 nova_compute[227360]: 2025-11-29 09:06:31.308 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:32 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:32 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:32 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:33.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:33 np0005539551 nova_compute[227360]: 2025-11-29 09:06:33.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:34 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:34 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:34 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:34.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:35 np0005539551 nova_compute[227360]: 2025-11-29 09:06:35.203 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:36 np0005539551 nova_compute[227360]: 2025-11-29 09:06:36.310 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:36 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:36 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:36 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:36.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:37.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:38 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:38 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:38 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:39.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:40 np0005539551 nova_compute[227360]: 2025-11-29 09:06:40.204 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:40 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:40 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:40 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:41 np0005539551 nova_compute[227360]: 2025-11-29 09:06:41.311 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:41.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:41 np0005539551 nova_compute[227360]: 2025-11-29 09:06:41.732 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:42 np0005539551 podman[318977]: 2025-11-29 09:06:42.59081317 +0000 UTC m=+0.047144528 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 04:06:42 np0005539551 podman[318976]: 2025-11-29 09:06:42.611481299 +0000 UTC m=+0.069109771 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 04:06:42 np0005539551 podman[318978]: 2025-11-29 09:06:42.616748832 +0000 UTC m=+0.069255525 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 04:06:42 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:42 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:42 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:44 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:44 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:44 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:44.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:45 np0005539551 nova_compute[227360]: 2025-11-29 09:06:45.207 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:45.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:46 np0005539551 nova_compute[227360]: 2025-11-29 09:06:46.313 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:46 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:46 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:46 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:47.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:48 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:50 np0005539551 nova_compute[227360]: 2025-11-29 09:06:50.210 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:50 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:50 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:51 np0005539551 nova_compute[227360]: 2025-11-29 09:06:51.316 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:51.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:52 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:52 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:52 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:55 np0005539551 nova_compute[227360]: 2025-11-29 09:06:55.214 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539551 nova_compute[227360]: 2025-11-29 09:06:56.319 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:57.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:06:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:59.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:00 np0005539551 nova_compute[227360]: 2025-11-29 09:07:00.216 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:01 np0005539551 nova_compute[227360]: 2025-11-29 09:07:01.322 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:05.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:05 np0005539551 nova_compute[227360]: 2025-11-29 09:07:05.219 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:05.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539551 nova_compute[227360]: 2025-11-29 09:07:06.365 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:10 np0005539551 nova_compute[227360]: 2025-11-29 09:07:10.221 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:11 np0005539551 nova_compute[227360]: 2025-11-29 09:07:11.368 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:11.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:11 np0005539551 nova_compute[227360]: 2025-11-29 09:07:11.424 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:13 np0005539551 podman[319065]: 2025-11-29 09:07:13.340427969 +0000 UTC m=+0.072608867 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:07:13 np0005539551 podman[319066]: 2025-11-29 09:07:13.354366456 +0000 UTC m=+0.077597512 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:07:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:13 np0005539551 podman[319064]: 2025-11-29 09:07:13.40326473 +0000 UTC m=+0.132489548 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 04:07:13 np0005539551 nova_compute[227360]: 2025-11-29 09:07:13.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:13 np0005539551 nova_compute[227360]: 2025-11-29 09:07:13.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:07:13 np0005539551 nova_compute[227360]: 2025-11-29 09:07:13.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:07:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.197 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.198 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.223 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.234 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.235 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.235 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.235 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.235 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1630020571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.727 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.890 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.892 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4220MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.892 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:15 np0005539551 nova_compute[227360]: 2025-11-29 09:07:15.893 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:16 np0005539551 nova_compute[227360]: 2025-11-29 09:07:16.409 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.435 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.436 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.532 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing inventories for resource provider 67c71d68-0dd7-4589-b775-189b4191a844 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.576 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating ProviderTree inventory for provider 67c71d68-0dd7-4589-b775-189b4191a844 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.576 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Updating inventory in ProviderTree for provider 67c71d68-0dd7-4589-b775-189b4191a844 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.589 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing aggregate associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.618 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Refreshing trait associations for resource provider 67c71d68-0dd7-4589-b775-189b4191a844, traits: COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:07:18 np0005539551 nova_compute[227360]: 2025-11-29 09:07:18.652 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:19 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:19 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2352589229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:19 np0005539551 nova_compute[227360]: 2025-11-29 09:07:19.079 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:19 np0005539551 nova_compute[227360]: 2025-11-29 09:07:19.084 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:07:19 np0005539551 nova_compute[227360]: 2025-11-29 09:07:19.101 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:07:19 np0005539551 nova_compute[227360]: 2025-11-29 09:07:19.102 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:07:19 np0005539551 nova_compute[227360]: 2025-11-29 09:07:19.103 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:07:19.913 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:07:19.914 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:07:19.914 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:20 np0005539551 nova_compute[227360]: 2025-11-29 09:07:20.271 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:20 np0005539551 podman[319355]: 2025-11-29 09:07:20.495778943 +0000 UTC m=+0.062832323 container exec 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 04:07:20 np0005539551 podman[319355]: 2025-11-29 09:07:20.600750815 +0000 UTC m=+0.167804185 container exec_died 90ce4adbab91e406b1c142d44002a8d5649da44e57a442af57615f2fecac4da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:07:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:21.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:21 np0005539551 nova_compute[227360]: 2025-11-29 09:07:21.314 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:21 np0005539551 nova_compute[227360]: 2025-11-29 09:07:21.316 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:21.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:21 np0005539551 nova_compute[227360]: 2025-11-29 09:07:21.410 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:22 np0005539551 nova_compute[227360]: 2025-11-29 09:07:22.406 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:23.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:24 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:25.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:25 np0005539551 nova_compute[227360]: 2025-11-29 09:07:25.273 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:25 np0005539551 nova_compute[227360]: 2025-11-29 09:07:25.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:26 np0005539551 nova_compute[227360]: 2025-11-29 09:07:26.457 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:27.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:29.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539551 nova_compute[227360]: 2025-11-29 09:07:30.331 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:30 np0005539551 nova_compute[227360]: 2025-11-29 09:07:30.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:30 np0005539551 nova_compute[227360]: 2025-11-29 09:07:30.410 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:07:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:31 np0005539551 nova_compute[227360]: 2025-11-29 09:07:31.460 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:33.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:34 np0005539551 nova_compute[227360]: 2025-11-29 09:07:34.411 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:35 np0005539551 nova_compute[227360]: 2025-11-29 09:07:35.349 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:36 np0005539551 nova_compute[227360]: 2025-11-29 09:07:36.498 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:38 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:40 np0005539551 nova_compute[227360]: 2025-11-29 09:07:40.351 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:41 np0005539551 nova_compute[227360]: 2025-11-29 09:07:41.500 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:43 np0005539551 podman[319691]: 2025-11-29 09:07:43.609719691 +0000 UTC m=+0.060212811 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:07:43 np0005539551 podman[319692]: 2025-11-29 09:07:43.625273012 +0000 UTC m=+0.068274029 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:07:43 np0005539551 podman[319690]: 2025-11-29 09:07:43.707027225 +0000 UTC m=+0.155130481 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:07:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:45 np0005539551 nova_compute[227360]: 2025-11-29 09:07:45.354 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:46 np0005539551 nova_compute[227360]: 2025-11-29 09:07:46.502 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:50 np0005539551 nova_compute[227360]: 2025-11-29 09:07:50.356 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:51 np0005539551 nova_compute[227360]: 2025-11-29 09:07:51.504 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:55 np0005539551 nova_compute[227360]: 2025-11-29 09:07:55.358 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:56 np0005539551 nova_compute[227360]: 2025-11-29 09:07:56.506 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:59.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:07:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:00 np0005539551 nova_compute[227360]: 2025-11-29 09:08:00.363 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:01 np0005539551 nova_compute[227360]: 2025-11-29 09:08:01.509 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:05 np0005539551 nova_compute[227360]: 2025-11-29 09:08:05.366 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:06 np0005539551 nova_compute[227360]: 2025-11-29 09:08:06.511 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:07 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:07 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:07 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:07 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:07.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:09 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:09 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:09 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539551 nova_compute[227360]: 2025-11-29 09:08:10.369 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:11 np0005539551 nova_compute[227360]: 2025-11-29 09:08:11.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:11 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:11 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:11 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:11 np0005539551 nova_compute[227360]: 2025-11-29 09:08:11.514 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:12 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:13 np0005539551 nova_compute[227360]: 2025-11-29 09:08:13.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:13 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:13 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:13 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.438 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.438 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.467 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.467 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.468 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.468 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.468 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:08:14 np0005539551 podman[319798]: 2025-11-29 09:08:14.618347195 +0000 UTC m=+0.068775174 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 04:08:14 np0005539551 podman[319797]: 2025-11-29 09:08:14.625152918 +0000 UTC m=+0.082567815 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:08:14 np0005539551 podman[319799]: 2025-11-29 09:08:14.625868628 +0000 UTC m=+0.076508282 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 04:08:14 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:14 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3567634774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:14 np0005539551 nova_compute[227360]: 2025-11-29 09:08:14.903 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.051 227364 WARNING nova.virt.libvirt.driver [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.052 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4209MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.053 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.053 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.134 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.134 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.271 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.371 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:15 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:15 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:15 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:15 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:15 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1287997365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.703 227364 DEBUG oslo_concurrency.processutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.710 227364 DEBUG nova.compute.provider_tree [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed in ProviderTree for provider: 67c71d68-0dd7-4589-b775-189b4191a844 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.725 227364 DEBUG nova.scheduler.client.report [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Inventory has not changed for provider 67c71d68-0dd7-4589-b775-189b4191a844 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.726 227364 DEBUG nova.compute.resource_tracker [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:08:15 np0005539551 nova_compute[227360]: 2025-11-29 09:08:15.726 227364 DEBUG oslo_concurrency.lockutils [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:16 np0005539551 nova_compute[227360]: 2025-11-29 09:08:16.562 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:17 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:17 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:17 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:17 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539551 nova_compute[227360]: 2025-11-29 09:08:18.698 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:18 np0005539551 nova_compute[227360]: 2025-11-29 09:08:18.699 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:19 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:19 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:19 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:08:19.914 139482 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:08:19.915 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:19 np0005539551 ovn_metadata_agent[139465]: 2025-11-29 09:08:19.915 139482 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:20 np0005539551 nova_compute[227360]: 2025-11-29 09:08:20.380 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:21 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:21 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:21 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:21 np0005539551 nova_compute[227360]: 2025-11-29 09:08:21.619 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:22 np0005539551 nova_compute[227360]: 2025-11-29 09:08:22.405 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:22 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:23.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:23 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:23 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:23 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:25.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:25 np0005539551 nova_compute[227360]: 2025-11-29 09:08:25.382 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:25 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:25 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:25 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:26 np0005539551 nova_compute[227360]: 2025-11-29 09:08:26.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:26 np0005539551 nova_compute[227360]: 2025-11-29 09:08:26.658 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:27 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:27 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:27 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:27 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:29 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:29 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:29 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:30 np0005539551 nova_compute[227360]: 2025-11-29 09:08:30.386 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:31 np0005539551 nova_compute[227360]: 2025-11-29 09:08:31.409 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:31 np0005539551 nova_compute[227360]: 2025-11-29 09:08:31.409 227364 DEBUG nova.compute.manager [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:08:31 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:31 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:31 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:31 np0005539551 nova_compute[227360]: 2025-11-29 09:08:31.659 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:32 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:33 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:33 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:33 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:33.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:34 np0005539551 nova_compute[227360]: 2025-11-29 09:08:34.410 227364 DEBUG oslo_service.periodic_task [None req-5fbbc006-7566-4a2f-a6e0-c77a6e85e0dc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:35 np0005539551 nova_compute[227360]: 2025-11-29 09:08:35.388 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:35 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:35 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:35 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:36 np0005539551 nova_compute[227360]: 2025-11-29 09:08:36.700 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:37 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:37 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:37 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:37 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:39 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:39 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:08:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:39 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:08:40 np0005539551 systemd-logind[788]: New session 60 of user zuul.
Nov 29 04:08:40 np0005539551 systemd[1]: Started Session 60 of User zuul.
Nov 29 04:08:40 np0005539551 nova_compute[227360]: 2025-11-29 09:08:40.392 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:41.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:41 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:41 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:41 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:41.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:41 np0005539551 nova_compute[227360]: 2025-11-29 09:08:41.702 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:42 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:43.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:43 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:43 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:43 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:43.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:43 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:08:43 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1173036093' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:08:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:45 np0005539551 nova_compute[227360]: 2025-11-29 09:08:45.394 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:45 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:45 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:45 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:45 np0005539551 podman[320334]: 2025-11-29 09:08:45.616242248 +0000 UTC m=+0.061911027 container health_status 806b9d1c3a9dd6bb51244118601764ebea2aa2ac9356d76ec970f3bfe6315d1f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:08:45 np0005539551 podman[320335]: 2025-11-29 09:08:45.616457114 +0000 UTC m=+0.059296076 container health_status e7ff94beb3e3642ca219da723f87b4446244310c7274fab169baab7b4411678e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:08:45 np0005539551 podman[320333]: 2025-11-29 09:08:45.648164602 +0000 UTC m=+0.094661253 container health_status 5cb3250aa06f4e45c7a2a550b4521912cec8318d94d0f2ee1d0ffa331abf5558 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 04:08:46 np0005539551 ovs-vsctl[320422]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:08:46 np0005539551 nova_compute[227360]: 2025-11-29 09:08:46.704 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:47 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:08:47 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:08:47 np0005539551 virtqemud[226785]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:08:47 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:47 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:47 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:47 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:47.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:47 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:08:47 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:48 np0005539551 lvm[320782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:08:48 np0005539551 lvm[320782]: VG ceph_vg0 finished
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:08:48 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:08:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/64839274' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:49 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:49 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:08:49 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:08:49 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/665972153' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:08:50 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: ops {prefix=ops} (starting...)
Nov 29 04:08:50 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189960888' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:08:50 np0005539551 nova_compute[227360]: 2025-11-29 09:08:50.397 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:08:50 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1887462661' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:08:50 np0005539551 ceph-mds[84617]: mds.cephfs.compute-1.ldsugj asok_command: status {prefix=status} (starting...)
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:08:50 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2330835388' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:08:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:08:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:08:51 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:51 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2992573714' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:51 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:51 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:51 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:51.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:51 np0005539551 nova_compute[227360]: 2025-11-29 09:08:51.705 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/114900305' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:52 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:08:52 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1119372860' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:08:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:53.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1849295217' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/7233192' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:08:53 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:53 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:53 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:53.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:53 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2632282219' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2120260587' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 388 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5de83c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c021e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f60ff800 session 0x5616f52fef00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446619648 unmapped: 52248576 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f402a5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f320cc00 session 0x5616f5875680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450437120 unmapped: 48431104 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5412000 session 0x5616f3354b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f5e57400 session 0x5616f512c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448888832 unmapped: 49979392 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f6ac1000 session 0x5616f3f692c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 ms_handle_reset con 0x5616f6ac1000 session 0x5616f5dfc1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 heartbeat osd_stat(store_statfs(0x19cb93000/0x0/0x1bfc00000, data 0x9de9d14/0x9fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5490833 data_alloc: 268435456 data_used: 62476288
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449118208 unmapped: 49750016 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 390 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5828000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 390 ms_handle_reset con 0x5616f320cc00 session 0x5616f509c960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449216512 unmapped: 49651712 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5e57400 session 0x5616f50d8f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f3257000 session 0x5616f5dfcb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5412000 session 0x5616f3bcc1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f30d9800 session 0x5616f4cdcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 heartbeat osd_stat(store_statfs(0x19c946000/0x0/0x1bfc00000, data 0xa034871/0xa237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f7095c00 session 0x5616f5e294a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433872896 unmapped: 64995328 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.518003464s of 10.177884102s, submitted: 314
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5e57400 session 0x5616f33243c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5211523 data_alloc: 251658240 data_used: 41857024
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433881088 unmapped: 64987136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433881088 unmapped: 64987136 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f4132000 session 0x5616f3bb6960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435404800 unmapped: 63463424 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433635328 unmapped: 65232896 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f30d9800 session 0x5616f5e29860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 ms_handle_reset con 0x5616f5412000 session 0x5616f4cdda40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 heartbeat osd_stat(store_statfs(0x19e6dd000/0x0/0x1bfc00000, data 0x82a15ef/0x84a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5283691 data_alloc: 268435456 data_used: 48095232
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 heartbeat osd_stat(store_statfs(0x19e6d9000/0x0/0x1bfc00000, data 0x82a41f8/0x84a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433643520 unmapped: 65224704 heap: 498868224 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f3c62000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f6ac1000 session 0x5616f560e780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f7095c00 session 0x5616f5873680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f30d9800 session 0x5616f362f860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5412000 session 0x5616f560d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f5de8960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f6ac1000 session 0x5616f2f7af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.647849083s of 10.085773468s, submitted: 53
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5527646 data_alloc: 268435456 data_used: 60395520
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f3257c00 session 0x5616f5df2780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435027968 unmapped: 80101376 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f30d9800 session 0x5616f5829e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5412000 session 0x5616f52ffe00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 80093184 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f5e57400 session 0x5616f509b2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 435036160 unmapped: 80093184 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 ms_handle_reset con 0x5616f75c1400 session 0x5616f60ee1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437993472 unmapped: 77135872 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 heartbeat osd_stat(store_statfs(0x19c789000/0x0/0x1bfc00000, data 0xa1ec208/0xa3ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439795712 unmapped: 75333632 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19b575000/0x0/0x1bfc00000, data 0xa265f7f/0xa468000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [2])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 393 ms_handle_reset con 0x5616f412d400 session 0x5616f5c20000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5487571 data_alloc: 268435456 data_used: 53460992
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440999936 unmapped: 74129408 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441016320 unmapped: 74113024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5483427 data_alloc: 268435456 data_used: 53526528
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19c3b4000/0x0/0x1bfc00000, data 0x9428f6f/0x962a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.015411377s of 10.591547012s, submitted: 184
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 393 heartbeat osd_stat(store_statfs(0x19c3b1000/0x0/0x1bfc00000, data 0x942bf6f/0x962d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441204736 unmapped: 73924608 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441221120 unmapped: 73908224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5481703 data_alloc: 268435456 data_used: 53526528
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441221120 unmapped: 73908224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441229312 unmapped: 73900032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3ac000/0x0/0x1bfc00000, data 0x942eb78/0x9631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441745408 unmapped: 73383936 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441745408 unmapped: 73383936 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3ac000/0x0/0x1bfc00000, data 0x942eb78/0x9631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5503425 data_alloc: 268435456 data_used: 55824384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.513783455s of 10.562233925s, submitted: 26
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441761792 unmapped: 73367552 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3a4000/0x0/0x1bfc00000, data 0x9435b78/0x9638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5505311 data_alloc: 268435456 data_used: 55832576
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441769984 unmapped: 73359360 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f33552c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f2f7a960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 73351168 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f3324b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f75c1400 session 0x5616f5de9e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19c3a5000/0x0/0x1bfc00000, data 0x9436b78/0x9639000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f4011c00 session 0x5616f5de8b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f58752c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f4cdc960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f32a0d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f75c1400 session 0x5616f5829a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2f36800 session 0x5616f5df3a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f582cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442933248 unmapped: 72196096 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5c03e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5588007 data_alloc: 268435456 data_used: 55848960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442998784 unmapped: 72130560 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.622900009s of 10.002265930s, submitted: 92
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f5e28780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19b9b5000/0x0/0x1bfc00000, data 0x9e22dea/0xa028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5e57400 session 0x5616f560dc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f6ac1000 session 0x5616f53f05a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f5874f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f5916f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570839 data_alloc: 268435456 data_used: 55844864
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5872f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 72105984 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 443031552 unmapped: 72097792 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444547072 unmapped: 70582272 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448094208 unmapped: 67035136 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19bb91000/0x0/0x1bfc00000, data 0x9c47b88/0x9e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec2000 session 0x5616f2f7a5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f50d9a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f33250e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d0fa000/0x0/0x1bfc00000, data 0x836ab88/0x856e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d0fa000/0x0/0x1bfc00000, data 0x836ab88/0x856e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5381098 data_alloc: 268435456 data_used: 53198848
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447365120 unmapped: 67764224 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d46f000/0x0/0x1bfc00000, data 0x836ab78/0x856d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19d46f000/0x0/0x1bfc00000, data 0x836ab78/0x856d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 67756032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f509d860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f33245a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447373312 unmapped: 67756032 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.172218323s of 12.415048599s, submitted: 83
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f3517000 session 0x5616f5c20d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d9800 session 0x5616f33c30e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f5412000 session 0x5616f5c214a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447397888 unmapped: 67731456 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19e677000/0x0/0x1bfc00000, data 0x7167b25/0x7366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172649 data_alloc: 251658240 data_used: 44666880
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f30d8800 session 0x5616f4cdcb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19e678000/0x0/0x1bfc00000, data 0x7167b25/0x7366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447414272 unmapped: 67715072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447078400 unmapped: 68050944 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5270581 data_alloc: 268435456 data_used: 45740032
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447078400 unmapped: 68050944 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 66854912 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448274432 unmapped: 66854912 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5875a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2eff000 session 0x5616f5df3860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19db3c000/0x0/0x1bfc00000, data 0x7c95b25/0x7e94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df2000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442515456 unmapped: 72613888 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442515456 unmapped: 72613888 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.709886551s of 11.222613335s, submitted: 175
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058100 data_alloc: 251658240 data_used: 35414016
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 72818688 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 heartbeat osd_stat(store_statfs(0x19eed0000/0x0/0x1bfc00000, data 0x690eab3/0x6b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f320cc00 session 0x5616f60ef680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 ms_handle_reset con 0x5616f2ec2000 session 0x5616f5828d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df32c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f2eff000 session 0x5616f582c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 395 ms_handle_reset con 0x5616f30d8800 session 0x5616f402b2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f5412000 session 0x5616f560eb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5075575 data_alloc: 251658240 data_used: 35434496
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 72777728 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 heartbeat osd_stat(store_statfs(0x19eca8000/0x0/0x1bfc00000, data 0x6b317bf/0x6d33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f509da40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f30d8800 session 0x5616f509ab40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442294272 unmapped: 72835072 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c02b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec2000 session 0x5616f560dc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 ms_handle_reset con 0x5616f2ec2000 session 0x5616f582cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.502508163s of 10.008315086s, submitted: 139
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 heartbeat osd_stat(store_statfs(0x19eeab000/0x0/0x1bfc00000, data 0x693553d/0x6b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5054005 data_alloc: 251658240 data_used: 35422208
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea7000/0x0/0x1bfc00000, data 0x6937146/0x6b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f52ffe00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5df2780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442302464 unmapped: 72826880 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f5412000 session 0x5616f5c02d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f33250e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f582d860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea7000/0x0/0x1bfc00000, data 0x6937156/0x6b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5e29680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058771 data_alloc: 251658240 data_used: 35430400
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442318848 unmapped: 72810496 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f30d8800 session 0x5616f2f7af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e19a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19eea6000/0x0/0x1bfc00000, data 0x693717f/0x6b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec2000 session 0x5616f3cf0960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5c02000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442343424 unmapped: 72785920 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdd2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x561704583800 session 0x5616f50dd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2f26400 session 0x5616f4d66f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442359808 unmapped: 72769536 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5605e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442368000 unmapped: 72761344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442368000 unmapped: 72761344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.372431755s of 10.531121254s, submitted: 66
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198499 data_alloc: 251658240 data_used: 35426304
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19e973000/0x0/0x1bfc00000, data 0x6e6b185/0x706a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442400768 unmapped: 72728576 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f2f7a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f5c210e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec52c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f5001400 session 0x5616f5df3680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f4d66f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50dd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f4cdd2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c02000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442400768 unmapped: 72728576 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442548224 unmapped: 72581120 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442556416 unmapped: 72572928 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df25a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5617023cbc00 session 0x5616f5de9a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f56054a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd65a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5238067 data_alloc: 251658240 data_used: 40054784
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x7bc81a8/0x7dc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442720256 unmapped: 72409088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f30d8800 session 0x5616f5829680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447840256 unmapped: 67289088 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f582d860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445145088 unmapped: 69984256 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f581ac00 session 0x5616f5828b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442048512 unmapped: 73080832 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560f0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x561704583800 session 0x5616f33245a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2eff000 session 0x5616f5de9c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.607354164s of 10.012436867s, submitted: 120
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5de9860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5605860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4631949 data_alloc: 234881024 data_used: 19976192
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 heartbeat osd_stat(store_statfs(0x1a1595000/0x0/0x1bfc00000, data 0x3e3b165/0x4038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423682048 unmapped: 91447296 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 91422720 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f581ac00 session 0x5616f3c63860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 90513408 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4624741 data_alloc: 234881024 data_used: 16244736
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 90169344 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1711000/0x0/0x1bfc00000, data 0x3cb0eb9/0x3eae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1706000/0x0/0x1bfc00000, data 0x3cc4eb9/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 heartbeat osd_stat(store_statfs(0x1a1706000/0x0/0x1bfc00000, data 0x3cc4eb9/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.879499435s of 10.224841118s, submitted: 135
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4626897 data_alloc: 234881024 data_used: 16068608
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 91144192 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x561704583800 session 0x5616f3bcc960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x561704583800 session 0x5616f509ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402b680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f2eff000 session 0x5616f402ab40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560e960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 91127808 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 91176960 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1381000/0x0/0x1bfc00000, data 0x404cb24/0x424c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 91176960 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f560e5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f50dd2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50dcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5dfd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1341000/0x0/0x1bfc00000, data 0x408cb87/0x428d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f509c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5617023cbc00 session 0x5616f60efa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f53f0f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f582c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f52fe3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4686260 data_alloc: 234881024 data_used: 16064512
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f5605c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1041000/0x0/0x1bfc00000, data 0x438cb87/0x458d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2eff000 session 0x5616f582c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f582cf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5873e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f58725a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 90906624 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f33250e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec0c00 session 0x5616f5c02d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1041000/0x0/0x1bfc00000, data 0x438cb87/0x458d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560d0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4584740 data_alloc: 234881024 data_used: 14233600
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420421632 unmapped: 94707712 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.874294281s of 11.168619156s, submitted: 124
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f5872780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f560d4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509dc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5e285a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1f86000/0x0/0x1bfc00000, data 0x344aa90/0x3647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530552 data_alloc: 234881024 data_used: 10539008
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 94699520 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420438016 unmapped: 94691328 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5e14d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a1f87000/0x0/0x1bfc00000, data 0x344aa90/0x3647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f3f692c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334326 data_alloc: 218103808 data_used: 5632000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x210ea1e/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x210ea1e/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334326 data_alloc: 218103808 data_used: 5632000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 102785024 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.168770790s of 15.469518661s, submitted: 82
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 100958208 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 100950016 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2c1b000/0x0/0x1bfc00000, data 0x27b8a1e/0x29b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 100950016 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5de9a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704583800 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f60eeb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386012 data_alloc: 218103808 data_used: 5632000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f509ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 100941824 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50d8f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f50d9e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5df3a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f3354f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f52feb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2784000/0x0/0x1bfc00000, data 0x2c4ea2e/0x2e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 100909056 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a2704000/0x0/0x1bfc00000, data 0x2ccea2e/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 100909056 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5de8780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50dd2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f75c1400 session 0x5616f50dde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f4cdde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f402b860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f3bd7680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415342592 unmapped: 99786752 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f509a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df21e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c5cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f3b503c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415342592 unmapped: 99786752 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f53f05a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f4cdcb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f581ac00 session 0x5616f5872000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f60ef4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f58741e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5df32c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f50dde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517000 session 0x5616f5df3a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5de9a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553674 data_alloc: 218103808 data_used: 5632000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f3f692c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 415367168 unmapped: 99762176 heap: 515129344 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f58725a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5873e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f402a3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f582c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f52fe3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f60efa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f59174a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3c625a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5df3680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 102400000 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320d400 session 0x5616f5828f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5de85a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416415744 unmapped: 102383616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5873c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.485384941s of 11.044736862s, submitted: 152
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0223000/0x0/0x1bfc00000, data 0x51acaaf/0x53ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5875c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0223000/0x0/0x1bfc00000, data 0x51acaaf/0x53ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416333824 unmapped: 102465536 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f5875c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 102449152 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f8393800 session 0x5616f5873c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831007 data_alloc: 234881024 data_used: 14082048
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417071104 unmapped: 101728256 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5828f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f5df3680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 101580800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f60ee5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417218560 unmapped: 101580800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f52fef00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417562624 unmapped: 101236736 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a01fc000/0x0/0x1bfc00000, data 0x51d0af2/0x53d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f3324000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f33245a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f2f7af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 417734656 unmapped: 101064704 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a01fb000/0x0/0x1bfc00000, data 0x51d0b15/0x53d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f60efa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f512c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4866860 data_alloc: 234881024 data_used: 17317888
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 100007936 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f509cf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420724736 unmapped: 98074624 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 420741120 unmapped: 98058240 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821496964s of 10.020360947s, submitted: 65
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0813000/0x0/0x1bfc00000, data 0x4b16ad3/0x4d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 426475520 unmapped: 92323840 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a03d9000/0x0/0x1bfc00000, data 0x4ff5ad3/0x51f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429613056 unmapped: 89186304 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4957772 data_alloc: 251658240 data_used: 32051200
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 429613056 unmapped: 89186304 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434085888 unmapped: 84713472 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f60efe00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26800 session 0x5616f33c3a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fe2b000/0x0/0x1bfc00000, data 0x559aad3/0x579a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434126848 unmapped: 84672512 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f58292c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4843881 data_alloc: 234881024 data_used: 28360704
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f53f0f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433512448 unmapped: 85286912 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f320cc00 session 0x5616f5de94a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f509a1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434356224 unmapped: 84443136 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0ae4000/0x0/0x1bfc00000, data 0x48deab3/0x4adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.756709099s of 10.414925575s, submitted: 278
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438312960 unmapped: 80486400 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f60ef680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437952512 unmapped: 80846848 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4909693 data_alloc: 234881024 data_used: 26497024
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437387264 unmapped: 81412096 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5c5cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560e5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0084000/0x0/0x1bfc00000, data 0x5346ab3/0x5544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26800 session 0x5616f509af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437395456 unmapped: 81403904 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a07d6000/0x0/0x1bfc00000, data 0x49a4ab3/0x4ba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 80076800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438722560 unmapped: 80076800 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438591488 unmapped: 80207872 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5605a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a09f7000/0x0/0x1bfc00000, data 0x49d9ab3/0x4bd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841649 data_alloc: 234881024 data_used: 26312704
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0a37000/0x0/0x1bfc00000, data 0x4999a50/0x4b96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 80199680 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.172043800s of 11.835883141s, submitted: 192
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4842113 data_alloc: 234881024 data_used: 26324992
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438607872 unmapped: 80191488 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f53f12c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f4cdcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 80183296 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609335 data_alloc: 234881024 data_used: 16211968
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609335 data_alloc: 234881024 data_used: 16211968
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17af000/0x0/0x1bfc00000, data 0x346f9bb/0x3669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 85630976 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433176576 unmapped: 85622784 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f509ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e28f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5df3a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.733957291s of 11.840334892s, submitted: 44
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f509b680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f509cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f33c2780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdcb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f362f0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433348608 unmapped: 85450752 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682381 data_alloc: 234881024 data_used: 16211968
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17c3000/0x0/0x1bfc00000, data 0x3c0fa2d/0x3e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433356800 unmapped: 85442560 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f58290e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a17c3000/0x0/0x1bfc00000, data 0x3c0fa2d/0x3e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433364992 unmapped: 85434368 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433381376 unmapped: 85417984 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733210 data_alloc: 234881024 data_used: 22900736
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a179e000/0x0/0x1bfc00000, data 0x3c33a50/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.563005447s of 10.698960304s, submitted: 55
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4733674 data_alloc: 234881024 data_used: 22908928
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a179e000/0x0/0x1bfc00000, data 0x3c33a50/0x3e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433389568 unmapped: 85409792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3517800 session 0x5616f60ee5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509cf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5605a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 433545216 unmapped: 85254144 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f5e28f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f5dfdc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f582c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4803369 data_alloc: 234881024 data_used: 22982656
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 434413568 unmapped: 84385792 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 82583552 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436895744 unmapped: 81903616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0807000/0x0/0x1bfc00000, data 0x4bcaa50/0x4dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436895744 unmapped: 81903616 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.587735176s of 11.122464180s, submitted: 139
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f60ef0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de9680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2f26400 session 0x5616f5de83c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2400 session 0x5616f362e960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d9800 session 0x5616f5e18f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436936704 unmapped: 81862656 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4970904 data_alloc: 234881024 data_used: 24236032
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436936704 unmapped: 81862656 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 81854464 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 436944896 unmapped: 81854464 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438747136 unmapped: 80052224 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fc82000/0x0/0x1bfc00000, data 0x574eab2/0x594c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f3354f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5dfd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438747136 unmapped: 80052224 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4844900 data_alloc: 234881024 data_used: 23859200
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437157888 unmapped: 81641472 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3bb70e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f32e2800 session 0x5616f509d860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437166080 unmapped: 81633280 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.755673409s of 10.567603111s, submitted: 95
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f5e29680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f3c63860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5c03e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f402a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0667000/0x0/0x1bfc00000, data 0x4d6ba7f/0x4f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e52c00 session 0x5616f5e28960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 437321728 unmapped: 81477632 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4962291 data_alloc: 251658240 data_used: 33669120
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440590336 unmapped: 78209024 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f5de8000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f5e28000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438689792 unmapped: 80109568 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0667000/0x0/0x1bfc00000, data 0x4d6ba7f/0x4f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f5df34a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 438714368 unmapped: 80084992 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440532992 unmapped: 78266368 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440549376 unmapped: 78249984 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4842813 data_alloc: 234881024 data_used: 23040000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440606720 unmapped: 78192640 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x1a0921000/0x0/0x1bfc00000, data 0x46a1a7f/0x489d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f582cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f6aa0000 session 0x5616f5de8d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f33245a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f5875e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440614912 unmapped: 78184448 heap: 518799360 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f90d8400 session 0x5616f5875e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f2f7af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x561704582c00 session 0x5616f5de8960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f33550e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e56800 session 0x5616f58283c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c02000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969624 data_alloc: 234881024 data_used: 29413376
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 441253888 unmapped: 81747968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f3f692c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3266c00 session 0x5616f509da40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016990662s of 12.429978371s, submitted: 257
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 80691200 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19fec3000/0x0/0x1bfc00000, data 0x50fdaf0/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,9,15,9])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f56052c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442310656 unmapped: 80691200 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f976000/0x0/0x1bfc00000, data 0x5649b00/0x5848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f58294a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 80674816 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f954000/0x0/0x1bfc00000, data 0x566ab23/0x586a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 80674816 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f954000/0x0/0x1bfc00000, data 0x566ab23/0x586a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058977 data_alloc: 251658240 data_used: 35041280
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f92d000/0x0/0x1bfc00000, data 0x5690b23/0x5890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f928000/0x0/0x1bfc00000, data 0x5695b23/0x5895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f928000/0x0/0x1bfc00000, data 0x5695b23/0x5895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5106433 data_alloc: 251658240 data_used: 39256064
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445292544 unmapped: 77709312 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.188847542s of 10.390885353s, submitted: 83
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447930368 unmapped: 75071488 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448323584 unmapped: 74678272 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449970176 unmapped: 73031680 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19f322000/0x0/0x1bfc00000, data 0x5c9cb23/0x5e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2,70])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449093632 unmapped: 73908224 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177385 data_alloc: 251658240 data_used: 39260160
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 73621504 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449380352 unmapped: 73621504 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19eeb9000/0x0/0x1bfc00000, data 0x6104b4c/0x6305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [0,0,0,0,0,0,5,2,50,0,7])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 64700416 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452206592 unmapped: 70795264 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5e52800 session 0x5616f5c03e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3519400 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f60f9c00 session 0x5616f5e18f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5345026 data_alloc: 251658240 data_used: 40615936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3518000 session 0x5616f362e960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f3519400 session 0x5616f5de9680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.339155436s of 10.027582169s, submitted: 217
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19c8e7000/0x0/0x1bfc00000, data 0x7536b4c/0x7737000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452214784 unmapped: 70787072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453509120 unmapped: 69492736 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453509120 unmapped: 69492736 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 ms_handle_reset con 0x5616f5412000 session 0x5616f582c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 69468160 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5374693 data_alloc: 251658240 data_used: 40714240
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453541888 unmapped: 69459968 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458301440 unmapped: 64700416 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 heartbeat osd_stat(store_statfs(0x19c7f4000/0x0/0x1bfc00000, data 0x7629b85/0x782a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 64675840 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 64675840 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5454477 data_alloc: 268435456 data_used: 48197632
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 64667648 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.992421150s of 10.850731850s, submitted: 70
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 64651264 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 400 heartbeat osd_stat(store_statfs(0x19c7ca000/0x0/0x1bfc00000, data 0x7651b85/0x7852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 64643072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 400 ms_handle_reset con 0x5617023cbc00 session 0x5616f5874b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 64643072 heap: 523001856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 400 ms_handle_reset con 0x5616f5436000 session 0x5616f3354f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5459395 data_alloc: 268435456 data_used: 48234496
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3518000 session 0x5616f5c034a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f52ff860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f5412000 session 0x5616f509b0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3519400 session 0x5616f58741e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f6ac1000 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5617023cbc00 session 0x5616f560c780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 43663360 heap: 532127744 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 heartbeat osd_stat(store_statfs(0x19c7c5000/0x0/0x1bfc00000, data 0x76558a8/0x7857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [0,0,0,0,0,0,1,5])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f7094400 session 0x5616f59163c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3518000 session 0x5616f50dcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3519400 session 0x5616f402a3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f53f0f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 475619328 unmapped: 63553536 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f3b503c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f560d0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476692480 unmapped: 62480384 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f58750e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f7094400 session 0x5616f5e283c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5617023cbc00 session 0x5616f3bd6b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f3354b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f3bb6d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f582de00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476700672 unmapped: 62472192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6c000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476717056 unmapped: 62455808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f5412000 session 0x5616f5e29860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f7094400 session 0x5616f5e283c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5825660 data_alloc: 268435456 data_used: 60710912
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6c000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476725248 unmapped: 62447616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 476758016 unmapped: 62414848 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.162752151s of 10.004703522s, submitted: 169
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f5874000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f362f0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f5e29e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f5412000 session 0x5616f5605e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5617029d0400 session 0x5616f60efa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3518000 session 0x5616f5917e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 heartbeat osd_stat(store_statfs(0x198c6e000/0x0/0x1bfc00000, data 0xa00a091/0xa210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473513984 unmapped: 65658880 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5810520 data_alloc: 268435456 data_used: 60723200
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 ms_handle_reset con 0x5616f3519400 session 0x5616f5c21860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5412000 session 0x5616f52fe000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f33c2780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f60fd000 session 0x5616f3bcc1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3518000 session 0x5616f5e29c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3519400 session 0x5616f5e281e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 65617920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 65617920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.9 total, 600.0 interval#012Cumulative writes: 62K writes, 244K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.68 writes per sync, written: 0.24 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.44 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4694 syncs, 2.50 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e56800 session 0x5616f560fa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e57400 session 0x5616f3355c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x198c45000/0x0/0x1bfc00000, data 0xa031c9a/0xa239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473604096 unmapped: 65568768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3fb2c00 session 0x5616f5dfd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e52800 session 0x5616f60ee5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f560e960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473604096 unmapped: 65568768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3519400 session 0x5616f560d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f3518000 session 0x5616f50dcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e56800 session 0x5616f3f692c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 69574656 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e52800 session 0x5616f5917680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5367180 data_alloc: 251658240 data_used: 41144320
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f3cf1680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 69574656 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5e57400 session 0x5616f5c02d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469614592 unmapped: 69558272 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469614592 unmapped: 69558272 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.227970123s of 10.570312500s, submitted: 124
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f3325680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f32e2400 session 0x5616f582cf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19b442000/0x0/0x1bfc00000, data 0x7839b94/0x7a3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f58290e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465469440 unmapped: 73703424 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465543168 unmapped: 73629696 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247727 data_alloc: 268435456 data_used: 53030912
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: mgrc ms_handle_reset ms_handle_reset con 0x5616fb858400
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320d000 session 0x5616f27045a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320dc00 session 0x5616f5c205a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f40c7800 session 0x5616f582d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5247727 data_alloc: 268435456 data_used: 53030912
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5c000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465534976 unmapped: 73637888 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.256523132s of 11.341904640s, submitted: 35
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19cc5b000/0x0/0x1bfc00000, data 0x601fb94/0x6222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [2])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467238912 unmapped: 71933952 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5342124 data_alloc: 268435456 data_used: 57995264
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c6d6000/0x0/0x1bfc00000, data 0x65a5b94/0x67a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 473391104 unmapped: 65781760 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5344980 data_alloc: 268435456 data_used: 58019840
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c6d6000/0x0/0x1bfc00000, data 0x65a5b94/0x67a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f560d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f58752c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f58bfc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f60ffc00 session 0x5616f560c960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470687744 unmapped: 68485120 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f58bf4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f2f7ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df3860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa1000 session 0x5616f4d663c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5373747 data_alloc: 268435456 data_used: 58421248
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470761472 unmapped: 68411392 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.540078163s of 11.709421158s, submitted: 32
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19c320000/0x0/0x1bfc00000, data 0x695bb94/0x6b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5370515 data_alloc: 268435456 data_used: 58421248
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 470384640 unmapped: 68788224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f3bb7e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471433216 unmapped: 67739648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f5e19c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5e18000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f5c025a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f7094800 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471433216 unmapped: 67739648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f509b4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f5916d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f3bd6f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5de90e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f7e48400 session 0x5616f4d67680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19bf7c000/0x0/0x1bfc00000, data 0x6cfcc29/0x6f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5430629 data_alloc: 268435456 data_used: 61665280
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 heartbeat osd_stat(store_statfs(0x19bf7c000/0x0/0x1bfc00000, data 0x6cfcc29/0x6f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f2f26400 session 0x5616f60eed20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f320ac00 session 0x5616f33552c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 67256320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f5415400 session 0x5616f5873680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472162304 unmapped: 67010560 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.858821869s of 12.966835976s, submitted: 38
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5de85a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472170496 unmapped: 67002368 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5441572 data_alloc: 268435456 data_used: 61931520
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 472170496 unmapped: 67002368 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 ms_handle_reset con 0x5616f412d800 session 0x5616f4d67e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f412d800 session 0x5616f582d860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f320ac00 session 0x5616f5c210e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c20000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 57712640 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 heartbeat osd_stat(store_statfs(0x19bf57000/0x0/0x1bfc00000, data 0x6d20c39/0x6f27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f5415400 session 0x5616f5df23c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5c5d0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 53321728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 485965824 unmapped: 53207040 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x199914000/0x0/0x1bfc00000, data 0x9360412/0x956a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 482426880 unmapped: 56745984 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5807578 data_alloc: 285212672 data_used: 72122368
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 483565568 unmapped: 55607296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 54419456 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x1991d1000/0x0/0x1bfc00000, data 0x9aa2612/0x9cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 407 heartbeat osd_stat(store_statfs(0x1991d1000/0x0/0x1bfc00000, data 0x9aa2612/0x9cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 54353920 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f2f26400 session 0x5616f3bb6960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 54345728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f3518000 session 0x5616f58283c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 ms_handle_reset con 0x5616f3519400 session 0x5616f5de8f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 heartbeat osd_stat(store_statfs(0x19b807000/0x0/0x1bfc00000, data 0x746a3a5/0x7676000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 54345728 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.260406494s of 11.294813156s, submitted: 121
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f5412000 session 0x5616f5de81e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f7095c00 session 0x5616f33550e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5535768 data_alloc: 285212672 data_used: 70262784
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f320ac00 session 0x5616f362f4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484868096 unmapped: 54304768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c210e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484868096 unmapped: 54304768 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 heartbeat osd_stat(store_statfs(0x19bdfc000/0x0/0x1bfc00000, data 0x6e73fca/0x7081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,1,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488734720 unmapped: 50438144 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 488071168 unmapped: 51101696 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f3519400 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 ms_handle_reset con 0x5616f5412000 session 0x5616f2f7ba40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 410 ms_handle_reset con 0x5616f3518000 session 0x5616f3bb7e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484196352 unmapped: 54976512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5300557 data_alloc: 268435456 data_used: 48500736
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 484196352 unmapped: 54976512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f2f26400 session 0x5616f5de92c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 479281152 unmapped: 59891712 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f90d8400 session 0x5616f52ffa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f320ac00 session 0x5616f5e29a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 ms_handle_reset con 0x5616f320ac00 session 0x5616f5df25a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 heartbeat osd_stat(store_statfs(0x19d77c000/0x0/0x1bfc00000, data 0x54f24dd/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973071 data_alloc: 234881024 data_used: 27447296
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.968615532s of 10.758380890s, submitted: 303
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5829680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f3fb3400 session 0x5616f582c780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19e860000/0x0/0x1bfc00000, data 0x440ce4f/0x461d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469557248 unmapped: 69615616 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3c63c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454434816 unmapped: 84738048 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2f26400 session 0x5616f5c5cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd6f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d67e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f320ac00 session 0x5616f50dde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bb7680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455491584 unmapped: 83681280 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455491584 unmapped: 83681280 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f2b8000/0x0/0x1bfc00000, data 0x39b4e4f/0x3bc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4799044 data_alloc: 218103808 data_used: 10117120
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 heartbeat osd_stat(store_statfs(0x19f2b7000/0x0/0x1bfc00000, data 0x39b5e4f/0x3bc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 83705856 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455467008 unmapped: 83705856 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f6aa1c00 session 0x5616f3355e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616ff64d800 session 0x5616f5e29c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f509b4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5873680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 86237184 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 86237184 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 86228992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff0a000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4671481 data_alloc: 218103808 data_used: 6033408
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5c21a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 86253568 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.958932877s of 10.313260078s, submitted: 148
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 86253568 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452837376 unmapped: 86335488 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4721172 data_alloc: 234881024 data_used: 12980224
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4721524 data_alloc: 234881024 data_used: 12980224
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff30000/0x0/0x1bfc00000, data 0x2d3fa0e/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.325275421s of 11.334540367s, submitted: 3
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19ff2a000/0x0/0x1bfc00000, data 0x2d45a0e/0x2f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452771840 unmapped: 86401024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454483968 unmapped: 84688896 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4777692 data_alloc: 234881024 data_used: 13152256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19f8e3000/0x0/0x1bfc00000, data 0x337ea0e/0x358d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455630848 unmapped: 83542016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455581696 unmapped: 83591168 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455589888 unmapped: 83582976 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4775208 data_alloc: 234881024 data_used: 13279232
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455565312 unmapped: 83607552 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 455565312 unmapped: 83607552 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f320ac00 session 0x5616f4cdde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560fa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb7e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f52ffa40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795422554s of 10.063662529s, submitted: 97
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f3288b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459776000 unmapped: 79396864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f3354f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a000e000/0x0/0x1bfc00000, data 0x28dca0e/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f4cdd2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616ff64d800 session 0x5616f5874b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f52fe780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f5df3860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f2f7a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4702053 data_alloc: 218103808 data_used: 6033408
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x19fbda000/0x0/0x1bfc00000, data 0x304f9ac/0x325d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453967872 unmapped: 85204992 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f5916b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f5e59000 session 0x5616f5e28780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de9680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5605e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58750e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f512c1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f5de8d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a00d1000/0x0/0x1bfc00000, data 0x278f989/0x299c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4613701 data_alloc: 218103808 data_used: 2564096
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448102400 unmapped: 91070464 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a00d1000/0x0/0x1bfc00000, data 0x278f9ac/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447897600 unmapped: 91275264 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.336462021s of 11.657950401s, submitted: 92
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682560 data_alloc: 218103808 data_used: 11546624
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f5605c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f5e59000 session 0x5616f5604b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f560d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682488 data_alloc: 218103808 data_used: 11546624
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.503433228s of 10.529278755s, submitted: 6
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449208320 unmapped: 89964544 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450256896 unmapped: 88915968 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450289664 unmapped: 88883200 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0090000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450330624 unmapped: 88842240 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450330624 unmapped: 88842240 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509da40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58734a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682440 data_alloc: 218103808 data_used: 11550720
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f58281e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f5c205a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682600 data_alloc: 218103808 data_used: 11554816
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a10d0000/0x0/0x1bfc00000, data 0x27cfa0f/0x29de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450355200 unmapped: 88817664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.885305405s of 17.616886139s, submitted: 256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3518000 session 0x5616f582cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f582cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450969600 unmapped: 88203264 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5916000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439574528 unmapped: 99598336 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3c63860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f5c212c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f362e1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f5604d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e281e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4622542 data_alloc: 218103808 data_used: 2826240
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1194000/0x0/0x1bfc00000, data 0x270c9ec/0x291a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f5c201e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f40b12c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439279616 unmapped: 99893248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bcc960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 99885056 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a1194000/0x0/0x1bfc00000, data 0x270c9ec/0x291a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 99885056 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58752c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4621486 data_alloc: 218103808 data_used: 2826240
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560c780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439296000 unmapped: 99876864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 439296000 unmapped: 99876864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f560e5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.537684441s of 10.185144424s, submitted: 96
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616fd7af400 session 0x5616f60eeb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440344576 unmapped: 98828288 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9ac/0x28da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628242 data_alloc: 218103808 data_used: 3612672
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 98787328 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9ac/0x28da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 440385536 unmapped: 98787328 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f582d680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdda40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4693548 data_alloc: 234881024 data_used: 15388672
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d2000/0x0/0x1bfc00000, data 0x26cca1e/0x28dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3fb3400 session 0x5616f52fe3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442286080 unmapped: 96886784 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f566a800 session 0x5616f52feb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.909992218s of 10.046482086s, submitted: 28
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f90d8400 session 0x5616f5c023c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a11d3000/0x0/0x1bfc00000, data 0x26cc9bc/0x28db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb65a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 96845824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442327040 unmapped: 96845824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 ms_handle_reset con 0x5616f3519400 session 0x5616f5e192c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 96821248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738982 data_alloc: 234881024 data_used: 15396864
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 96821248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0cc6000/0x0/0x1bfc00000, data 0x2bd76df/0x2de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444358656 unmapped: 94814208 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788338 data_alloc: 234881024 data_used: 15515648
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f512c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bb7a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4788498 data_alloc: 234881024 data_used: 15519744
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444366848 unmapped: 94806016 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 94797824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f5872000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 444375040 unmapped: 94797824 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a066c000/0x0/0x1bfc00000, data 0x32316df/0x3441000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.315635681s of 16.485715866s, submitted: 45
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449699840 unmapped: 89473024 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f59163c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f33552c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f33c30e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f3324000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f412d800 session 0x5616f5c21a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 445997056 unmapped: 93175808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f509a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f5828b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900087 data_alloc: 234881024 data_used: 20758528
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5c203c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f582dc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f5415400 session 0x5616f512cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58bf4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f4cdcb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f566a800 session 0x5616f58bf4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3fb3400 session 0x5616f4d67e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f6aa0c00 session 0x5616f5e29a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fcfb000/0x0/0x1bfc00000, data 0x3ba36df/0x3db3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb6d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f60ef4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 446447616 unmapped: 92725248 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f5c03860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4900379 data_alloc: 234881024 data_used: 20787200
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f589a000 session 0x5616f2ec4d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7e48c00 session 0x5616f5605a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448356352 unmapped: 90816512 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f2ec3400 session 0x5616f582de00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 heartbeat osd_stat(store_statfs(0x19fcfb000/0x0/0x1bfc00000, data 0x3ba36df/0x3db3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f3519400 session 0x5616f512de00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f7095c00 session 0x5616f3bd7680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f60ff400 session 0x5616f509cd20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 448364544 unmapped: 90808320 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 ms_handle_reset con 0x5616f589a000 session 0x5616f3f683c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5de8000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 447709184 unmapped: 91463680 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f3519400 session 0x5616f59170e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4889461 data_alloc: 234881024 data_used: 24100864
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0200000/0x0/0x1bfc00000, data 0x369c455/0x38ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0200000/0x0/0x1bfc00000, data 0x369c455/0x38ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f60ff400 session 0x5616f5605860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449052672 unmapped: 90120192 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.257311821s of 15.645789146s, submitted: 59
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 ms_handle_reset con 0x5616f7095c00 session 0x5616f362f0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449060864 unmapped: 90112000 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4895835 data_alloc: 234881024 data_used: 24109056
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449060864 unmapped: 90112000 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5617029d1000 session 0x5616f5872780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x1a01fc000/0x0/0x1bfc00000, data 0x369e0c0/0x38b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [1,0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449069056 unmapped: 90103808 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f3519400 session 0x5616f5de81e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f60ff400 session 0x5616f3bd65a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 89874432 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 449298432 unmapped: 89874432 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451903488 unmapped: 87269376 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5064164 data_alloc: 234881024 data_used: 24522752
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ec51000/0x0/0x1bfc00000, data 0x4c4505e/0x4e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ebbc000/0x0/0x1bfc00000, data 0x4cd105e/0x4ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452976640 unmapped: 86196224 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5074174 data_alloc: 234881024 data_used: 24330240
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 86188032 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.492916107s of 11.969421387s, submitted: 150
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452984832 unmapped: 86188032 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f412c800 session 0x5616f5de9e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2effc00 session 0x5616f5c203c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19ebaa000/0x0/0x1bfc00000, data 0x4cf205e/0x4f04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2effc00 session 0x5616f402b860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f3fb3400 session 0x5616f3bcd680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f566a800 session 0x5616f582dc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452993024 unmapped: 86179840 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4945171 data_alloc: 234881024 data_used: 20971520
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d663c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453001216 unmapped: 86171648 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 heartbeat osd_stat(store_statfs(0x19f8c8000/0x0/0x1bfc00000, data 0x3fd504f/0x41e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453009408 unmapped: 86163456 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f3519400 session 0x5616f58beb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2effc00 session 0x5616f33552c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec3400 session 0x5616f50dd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616fd7af400 session 0x5616f5e18b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58721e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f3fb3400 session 0x5616f58be000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 453427200 unmapped: 85745664 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 ms_handle_reset con 0x5616f2ec3400 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 420 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616f2ec1000 session 0x5616f512c000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616f566a800 session 0x5616f5872000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 420 ms_handle_reset con 0x5616fd7af400 session 0x5616f5c203c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451584000 unmapped: 87588864 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2effc00 session 0x5616f50dd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de81e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 421 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df34a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 87580672 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4988778 data_alloc: 234881024 data_used: 14598144
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 87572480 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451600384 unmapped: 87572480 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f069000/0x0/0x1bfc00000, data 0x482c877/0x4a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454402048 unmapped: 84770816 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.975195885s of 12.467619896s, submitted: 140
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 450756608 unmapped: 88416256 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f509c1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4964992 data_alloc: 234881024 data_used: 18284544
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4964992 data_alloc: 234881024 data_used: 18284544
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f77b000/0x0/0x1bfc00000, data 0x411b480/0x4332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 451821568 unmapped: 87351296 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452878336 unmapped: 86294528 heap: 539172864 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f50d9e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f60ff400 session 0x5616f5e18960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd7e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f5829e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f60ef2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.915423393s of 10.204797745s, submitted: 83
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 59777024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f6ac1400 session 0x5616f5c02d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 heartbeat osd_stat(store_statfs(0x19f0f5000/0x0/0x1bfc00000, data 0x47a2480/0x49b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bcc1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229959 data_alloc: 251658240 data_used: 37421056
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bd63c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f412c800 session 0x5616f5c025a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 478314496 unmapped: 69263360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 423 ms_handle_reset con 0x5616f2f37000 session 0x5616f50d8780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456826880 unmapped: 90750976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 423 heartbeat osd_stat(store_statfs(0x19f603000/0x0/0x1bfc00000, data 0x42911f7/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5053639 data_alloc: 234881024 data_used: 23670784
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 423 heartbeat osd_stat(store_statfs(0x19f603000/0x0/0x1bfc00000, data 0x42911f7/0x44a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029996872s of 11.434091568s, submitted: 84
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056437 data_alloc: 234881024 data_used: 23670784
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f601000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457383936 unmapped: 90193920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f602000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457539584 unmapped: 90038272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5087533 data_alloc: 234881024 data_used: 26923008
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x19f602000/0x0/0x1bfc00000, data 0x4292e00/0x44ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089773 data_alloc: 234881024 data_used: 26980352
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.539049149s of 11.555925369s, submitted: 21
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f60ffc00 session 0x5616f582da40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f60f6000 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457605120 unmapped: 89972736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 ms_handle_reset con 0x5616f2ec1000 session 0x5616f402a5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 452968448 unmapped: 94609408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 heartbeat osd_stat(store_statfs(0x1a0871000/0x0/0x1bfc00000, data 0x3023d9e/0x323c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f59163c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f509c960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a086c000/0x0/0x1bfc00000, data 0x3025c31/0x3240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4859937 data_alloc: 234881024 data_used: 13221888
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f4cdde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3c63c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f362f860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f509c780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454017024 unmapped: 93560832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60ffc00 session 0x5616f2ec4d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f560d2c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f509c780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f4cdde00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a02fa000/0x0/0x1bfc00000, data 0x3599c31/0x37b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918678 data_alloc: 234881024 data_used: 17874944
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x1a02fa000/0x0/0x1bfc00000, data 0x3599c31/0x37b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e183c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bd63c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f5e18960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.991387367s of 10.391763687s, submitted: 83
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454336512 unmapped: 93241344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616ff64c800 session 0x5616f5c212c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f50d9a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 454344704 unmapped: 93233152 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f4d663c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f50dcf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456785920 unmapped: 90791936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5017348 data_alloc: 234881024 data_used: 23482368
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f3bcd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd1a000/0x0/0x1bfc00000, data 0x3b77ca3/0x3d94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c3c00 session 0x5616f5c02b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb7e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f50dd4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456949760 unmapped: 90628096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5018102 data_alloc: 234881024 data_used: 23515136
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457138176 unmapped: 90439680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5018742 data_alloc: 234881024 data_used: 24084480
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.307758331s of 13.497234344s, submitted: 60
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fd19000/0x0/0x1bfc00000, data 0x3b77cb3/0x3d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,1,2,2,7])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f5e57800 session 0x5616f5de83c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465027072 unmapped: 82550784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c0400 session 0x5616f5917c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412cc00 session 0x5616f5917680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5916b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465387520 unmapped: 82190336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465477632 unmapped: 82100224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467640320 unmapped: 79937536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e7a0000/0x0/0x1bfc00000, data 0x4cdece6/0x4efe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5224004 data_alloc: 251658240 data_used: 30937088
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e7a0000/0x0/0x1bfc00000, data 0x4cdece6/0x4efe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467771392 unmapped: 79806464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467779584 unmapped: 79798272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225124 data_alloc: 251658240 data_used: 30965760
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.863697052s of 10.178115845s, submitted: 132
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 77692928 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 469884928 unmapped: 77692928 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e3da000/0x0/0x1bfc00000, data 0x50a4ce6/0x52c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f7095c00 session 0x5616f509c1e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60fa000 session 0x5616f5de90e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 80510976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f75c0400 session 0x5616f3c630e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463896576 unmapped: 83681280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 465829888 unmapped: 81747968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5095579 data_alloc: 234881024 data_used: 24866816
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466575360 unmapped: 81002496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f274000/0x0/0x1bfc00000, data 0x420ace6/0x442a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111081 data_alloc: 234881024 data_used: 24174592
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466591744 unmapped: 80986112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f274000/0x0/0x1bfc00000, data 0x420ace6/0x442a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466599936 unmapped: 80977920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.665365219s of 12.100666046s, submitted: 147
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466206720 unmapped: 81371136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 81354752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466223104 unmapped: 81354752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f270000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3bb74a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f5e57800 session 0x5616f5829c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5108485 data_alloc: 234881024 data_used: 24137728
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f271000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466231296 unmapped: 81346560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58743c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f271000/0x0/0x1bfc00000, data 0x420dce6/0x442d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466239488 unmapped: 81338368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5100508 data_alloc: 234881024 data_used: 24023040
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466239488 unmapped: 81338368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f3289680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f60f6000 session 0x5616f560f680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f58292c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466247680 unmapped: 81330176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466247680 unmapped: 81330176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.273255348s of 11.444032669s, submitted: 70
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466264064 unmapped: 81313792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5df3860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f298000/0x0/0x1bfc00000, data 0x41e9ca3/0x4406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466280448 unmapped: 81297408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5062775 data_alloc: 234881024 data_used: 24018944
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466280448 unmapped: 81297408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 ms_handle_reset con 0x5616f412c800 session 0x5616f58734a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466296832 unmapped: 81281024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f5e57800 session 0x5616f512cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f566a800 session 0x5616f362f0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616fd7af400 session 0x5616f5605860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466305024 unmapped: 81272832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2ec5a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466305024 unmapped: 81272832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 heartbeat osd_stat(store_statfs(0x19f6ca000/0x0/0x1bfc00000, data 0x3db6a0a/0x3fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec3400 session 0x5616f58743c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 466337792 unmapped: 81240064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5065343 data_alloc: 234881024 data_used: 24027136
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f412c800 session 0x5616f3c630e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f5e57800 session 0x5616f5c212c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5e18960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616f412c800 session 0x5616f509c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df3e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 heartbeat osd_stat(store_statfs(0x1a08f2000/0x0/0x1bfc00000, data 0x2b8ea1a/0x2dac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.148363113s of 10.002849579s, submitted: 144
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 427 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3f68b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463544320 unmapped: 84033536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858742 data_alloc: 234881024 data_used: 14594048
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a08eb000/0x0/0x1bfc00000, data 0x2b923b4/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a08eb000/0x0/0x1bfc00000, data 0x2b923b4/0x2db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861044 data_alloc: 234881024 data_used: 14594048
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e9000/0x0/0x1bfc00000, data 0x2b93fbd/0x2db4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f7095c00 session 0x5616f5e14d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e9000/0x0/0x1bfc00000, data 0x2b93fbd/0x2db4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.008526802s of 10.033182144s, submitted: 30
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5873a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463568896 unmapped: 84008960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463577088 unmapped: 84000768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861992 data_alloc: 234881024 data_used: 14594048
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463577088 unmapped: 84000768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5873c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f5875e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616fd7af400 session 0x5616f5829a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f75c0400 session 0x5616f5604960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e6000/0x0/0x1bfc00000, data 0x2b93fcd/0x2db5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875568 data_alloc: 234881024 data_used: 15200256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e6000/0x0/0x1bfc00000, data 0x2b93fcd/0x2db5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e4000/0x0/0x1bfc00000, data 0x2b98fcd/0x2dba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875126 data_alloc: 234881024 data_used: 15200256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.116181374s of 13.762666702s, submitted: 33
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08e3000/0x0/0x1bfc00000, data 0x2b99fcd/0x2dbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f33c25a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5874f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4875378 data_alloc: 234881024 data_used: 15200256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f52feb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616fd7af400 session 0x5616f362f4a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463585280 unmapped: 83992576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3668260469' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4881025 data_alloc: 234881024 data_used: 15208448
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08be000/0x0/0x1bfc00000, data 0x2bbdfdd/0x2de0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463593472 unmapped: 83984384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463609856 unmapped: 83968000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.698847771s of 11.737822533s, submitted: 9
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60f6000 session 0x5616f59165a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60fa000 session 0x5616f5873860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f2705c20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885408 data_alloc: 234881024 data_used: 16318464
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a08bf000/0x0/0x1bfc00000, data 0x2bbdfcd/0x2ddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e143c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 463618048 unmapped: 83959808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f5dfda40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 456876032 unmapped: 90701824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457089024 unmapped: 90488832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457089024 unmapped: 90488832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4747316 data_alloc: 218103808 data_used: 7409664
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4754578 data_alloc: 218103808 data_used: 7409664
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1240000/0x0/0x1bfc00000, data 0x223ef5b/0x245e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.422370911s of 14.626880646s, submitted: 63
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f9f27c00 session 0x5616f402a5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f63da800 session 0x5616f58bf860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1241000/0x0/0x1bfc00000, data 0x223ef4b/0x245d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f9f27c00 session 0x5616f3354780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457007104 unmapped: 90570752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509af00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec3400 session 0x5616f3324000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f412c800 session 0x5616f4d66d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457015296 unmapped: 90562560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5de94a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457023488 unmapped: 90554368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457031680 unmapped: 90546176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4745933 data_alloc: 218103808 data_used: 7299072
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457039872 unmapped: 90537984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4750573 data_alloc: 218103808 data_used: 7696384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1265000/0x0/0x1bfc00000, data 0x221af4b/0x2439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.722682953s of 38.938465118s, submitted: 21
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457048064 unmapped: 90529792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753789 data_alloc: 218103808 data_used: 7696384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2226f4b/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1259000/0x0/0x1bfc00000, data 0x2226f4b/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4754049 data_alloc: 218103808 data_used: 7696384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457056256 unmapped: 90521600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753169 data_alloc: 218103808 data_used: 7696384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457064448 unmapped: 90513408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457072640 unmapped: 90505216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457072640 unmapped: 90505216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4753169 data_alloc: 218103808 data_used: 7696384
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4758929 data_alloc: 218103808 data_used: 8683520
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 ms_handle_reset con 0x5616f60fa000 session 0x5616f4cdd860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 heartbeat osd_stat(store_statfs(0x1a1258000/0x0/0x1bfc00000, data 0x2227f4b/0x2446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457080832 unmapped: 90497024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.687757492s of 24.706781387s, submitted: 7
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616fd7af400 session 0x5616f5df2000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f9f27c00 session 0x5616f5873860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4762751 data_alloc: 218103808 data_used: 8691712
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c6e/0x2449000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f8393000 session 0x5616f33c23c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f5e59c00 session 0x5616f5874f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f8393000 session 0x5616f33c25a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c7e/0x244a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a1254000/0x0/0x1bfc00000, data 0x2229c7e/0x244a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4787213 data_alloc: 218103808 data_used: 8691712
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5875e00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 ms_handle_reset con 0x5616f60fa000 session 0x5616f5c212c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a10d4000/0x0/0x1bfc00000, data 0x23a9c7e/0x25ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 heartbeat osd_stat(store_statfs(0x1a0d51000/0x0/0x1bfc00000, data 0x272cc7e/0x294d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4813800 data_alloc: 218103808 data_used: 8691712
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.882975578s of 14.058014870s, submitted: 44
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458219520 unmapped: 89358336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 432 ms_handle_reset con 0x5616fd7af400 session 0x5616f5605860
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458235904 unmapped: 89341952 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 432 heartbeat osd_stat(store_statfs(0x1a0d48000/0x0/0x1bfc00000, data 0x273072b/0x2954000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 433 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5828b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4905089 data_alloc: 218103808 data_used: 8699904
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616fd7af400 session 0x5616f3bb6000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f9f27c00 session 0x5616f512cb40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 heartbeat osd_stat(store_statfs(0x1a0398000/0x0/0x1bfc00000, data 0x30de20b/0x3305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f5e59c00 session 0x5616f5df25a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f8393000 session 0x5616f58741e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616fd7ae400 session 0x5616f4d66b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f8393000 session 0x5616f58bef00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462127104 unmapped: 85450752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 ms_handle_reset con 0x5616f2ec1000 session 0x5616f27045a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f5dfda40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f60fa000 session 0x5616f402a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006405 data_alloc: 218103808 data_used: 8712192
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006405 data_alloc: 218103808 data_used: 8712192
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.717531204s of 16.022281647s, submitted: 72
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f820000/0x0/0x1bfc00000, data 0x3c52f99/0x3e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [0,1])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014845 data_alloc: 218103808 data_used: 8712192
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f81f000/0x0/0x1bfc00000, data 0x3cc1f99/0x3e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458489856 unmapped: 89088000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461832192 unmapped: 85745664 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5049196 data_alloc: 218103808 data_used: 8863744
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5c03680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461840384 unmapped: 85737472 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f5e28b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f8393000 session 0x5616f5c20960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461848576 unmapped: 85729280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.018222809s of 12.057978630s, submitted: 6
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616fd7ae400 session 0x5616f3bb7a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4cf000/0x0/0x1bfc00000, data 0x4012f99/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88989696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5046046 data_alloc: 218103808 data_used: 8871936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459407360 unmapped: 88170496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123326 data_alloc: 234881024 data_used: 19738624
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123326 data_alloc: 234881024 data_used: 19738624
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19f4aa000/0x0/0x1bfc00000, data 0x4036fa9/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459898880 unmapped: 87678976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.779089928s of 12.794838905s, submitted: 2
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 461488128 unmapped: 86089728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462577664 unmapped: 85000192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d938000/0x0/0x1bfc00000, data 0x509dfa9/0x4bc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5277298 data_alloc: 234881024 data_used: 20439040
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d92f000/0x0/0x1bfc00000, data 0x51b9fa9/0x4bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f9f26c00 session 0x5616f5e28000
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f40c7400 session 0x5616f509a780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 84992000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5277298 data_alloc: 234881024 data_used: 20439040
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3bb6d20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462594048 unmapped: 84983808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f6aa0800 session 0x5616f582cf00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f566b800 session 0x5616f4cdd0e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.805445671s of 10.007963181s, submitted: 88
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f75c0800 session 0x5616f582c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462610432 unmapped: 84967424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 heartbeat osd_stat(store_statfs(0x19d954000/0x0/0x1bfc00000, data 0x5195f99/0x4baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462610432 unmapped: 84967424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462618624 unmapped: 84959232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 ms_handle_reset con 0x5616f6aa0800 session 0x5616f5df2960
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f566b800 session 0x5616f5872780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f2ec1000 session 0x5616f3288780
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f75c0800 session 0x5616f582c5a0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 462643200 unmapped: 84934656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4965343 data_alloc: 234881024 data_used: 15863808
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f40c7400 session 0x5616f5c21680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f8393000 session 0x5616f5df2f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616fd7ae400 session 0x5616f5dfda40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 heartbeat osd_stat(store_statfs(0x19d957000/0x0/0x1bfc00000, data 0x5195f04/0x4ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 ms_handle_reset con 0x5616f2ec1000 session 0x5616f5828b40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f566b800 session 0x5616f5c212c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a009e000/0x0/0x1bfc00000, data 0x2236ba4/0x245f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f2ec3400 session 0x5616f52fe3c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f63da800 session 0x5616f56052c0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4831173 data_alloc: 218103808 data_used: 8921088
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 ms_handle_reset con 0x5616f2ec3400 session 0x5616f5e18f00
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457850880 unmapped: 89726976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959725380s of 12.410096169s, submitted: 147
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a009f000/0x0/0x1bfc00000, data 0x2236ba4/0x245f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4835347 data_alloc: 218103808 data_used: 8929280
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f2ec1000 session 0x5616f509da40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f566b800 session 0x5616f58741e0
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458891264 unmapped: 88686592 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458317824 unmapped: 89260032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458285056 unmapped: 89292800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458006528 unmapped: 89571328 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458006528 unmapped: 89571328 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458006528 unmapped: 89571328 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458014720 unmapped: 89563136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458014720 unmapped: 89563136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.9 total, 600.0 interval
Cumulative writes: 68K writes, 265K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
Cumulative WAL: 68K writes, 25K syncs, 2.65 writes per sync, written: 0.25 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5487 writes, 20K keys, 5487 commit groups, 1.0 writes per commit group, ingest: 16.82 MB, 0.03 MB/s
Interval WAL: 5487 writes, 2358 syncs, 2.33 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.64              0.00         1    0.643       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5616f192a430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.9 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89538560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89538560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89538560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89538560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 89538560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458055680 unmapped: 89522176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458113024 unmapped: 89464832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457883648 unmapped: 89694208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457883648 unmapped: 89694208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457883648 unmapped: 89694208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457891840 unmapped: 89686016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457891840 unmapped: 89686016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457900032 unmapped: 89677824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457916416 unmapped: 89661440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457916416 unmapped: 89661440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457916416 unmapped: 89661440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 213.760330200s of 213.801147461s, submitted: 34
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457916416 unmapped: 89661440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457916416 unmapped: 89661440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457949184 unmapped: 89628672 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 457990144 unmapped: 89587712 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458014720 unmapped: 89563136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 89554944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458031104 unmapped: 89546752 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 89530368 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 89513984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458072064 unmapped: 89505792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 89497600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458088448 unmapped: 89489408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 89481216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 89473024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458113024 unmapped: 89464832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458113024 unmapped: 89464832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 89456640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458129408 unmapped: 89448448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458137600 unmapped: 89440256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458145792 unmapped: 89432064 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458153984 unmapped: 89423872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458162176 unmapped: 89415680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458170368 unmapped: 89407488 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458178560 unmapped: 89399296 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458186752 unmapped: 89391104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458194944 unmapped: 89382912 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458203136 unmapped: 89374720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458211328 unmapped: 89366528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458211328 unmapped: 89366528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458211328 unmapped: 89366528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458219520 unmapped: 89358336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458219520 unmapped: 89358336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458227712 unmapped: 89350144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458235904 unmapped: 89341952 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458235904 unmapped: 89341952 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458244096 unmapped: 89333760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458252288 unmapped: 89325568 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458252288 unmapped: 89325568 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458252288 unmapped: 89325568 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458260480 unmapped: 89317376 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458268672 unmapped: 89309184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458276864 unmapped: 89300992 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458276864 unmapped: 89300992 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458276864 unmapped: 89300992 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458276864 unmapped: 89300992 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458276864 unmapped: 89300992 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458293248 unmapped: 89284608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 89268224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 89268224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458317824 unmapped: 89260032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458317824 unmapped: 89260032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458317824 unmapped: 89260032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 89251840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 89251840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 89251840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 89251840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458326016 unmapped: 89251840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458342400 unmapped: 89235456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458342400 unmapped: 89235456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458342400 unmapped: 89235456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458342400 unmapped: 89235456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 89227264 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458350592 unmapped: 89227264 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 89219072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 89219072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 89219072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458358784 unmapped: 89219072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458366976 unmapped: 89210880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458366976 unmapped: 89210880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458366976 unmapped: 89210880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458366976 unmapped: 89210880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458375168 unmapped: 89202688 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458375168 unmapped: 89202688 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 89194496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 89194496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458383360 unmapped: 89194496 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 89186304 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 89186304 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 89186304 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 89186304 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458391552 unmapped: 89186304 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458399744 unmapped: 89178112 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458407936 unmapped: 89169920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f5e52800 session 0x5616f5e19680
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458407936 unmapped: 89169920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f75c0c00 session 0x5616f560ed20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616f320dc00 session 0x5616f5e19a40
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458416128 unmapped: 89161728 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458424320 unmapped: 89153536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458424320 unmapped: 89153536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458424320 unmapped: 89153536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458424320 unmapped: 89153536 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458432512 unmapped: 89145344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458432512 unmapped: 89145344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458432512 unmapped: 89145344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458432512 unmapped: 89145344 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458440704 unmapped: 89137152 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458440704 unmapped: 89137152 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458448896 unmapped: 89128960 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458457088 unmapped: 89120768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458457088 unmapped: 89120768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458457088 unmapped: 89120768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458457088 unmapped: 89120768 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458465280 unmapped: 89112576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458465280 unmapped: 89112576 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458473472 unmapped: 89104384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 89096192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458498048 unmapped: 89079808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458498048 unmapped: 89079808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458506240 unmapped: 89071616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458514432 unmapped: 89063424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458522624 unmapped: 89055232 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 89047040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 89047040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 89047040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458530816 unmapped: 89047040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458539008 unmapped: 89038848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458539008 unmapped: 89038848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458547200 unmapped: 89030656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458555392 unmapped: 89022464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458555392 unmapped: 89022464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458563584 unmapped: 89014272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 89006080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 89006080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 89006080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 89006080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 89006080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458579968 unmapped: 88997888 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88989696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458588160 unmapped: 88989696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 88981504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458596352 unmapped: 88981504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 88973312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 88973312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 88973312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 88973312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458604544 unmapped: 88973312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458612736 unmapped: 88965120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458629120 unmapped: 88948736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 88940544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88932352 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 88932352 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 88924160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 88924160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 88924160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 88924160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458653696 unmapped: 88924160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 ms_handle_reset con 0x5616ff64d800 session 0x5616f509bc20
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458661888 unmapped: 88915968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 88907776 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458670080 unmapped: 88907776 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458686464 unmapped: 88891392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 88883200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 88883200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 88883200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 88883200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458694656 unmapped: 88883200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458711040 unmapped: 88866816 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458711040 unmapped: 88866816 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458711040 unmapped: 88866816 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458711040 unmapped: 88866816 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458711040 unmapped: 88866816 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458719232 unmapped: 88858624 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458727424 unmapped: 88850432 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458727424 unmapped: 88850432 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 88842240 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 88842240 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 88842240 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458735616 unmapped: 88842240 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 88834048 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458743808 unmapped: 88834048 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458752000 unmapped: 88825856 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458752000 unmapped: 88825856 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458752000 unmapped: 88825856 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458752000 unmapped: 88825856 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458760192 unmapped: 88817664 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458776576 unmapped: 88801280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458776576 unmapped: 88801280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458776576 unmapped: 88801280 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458784768 unmapped: 88793088 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458801152 unmapped: 88776704 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458817536 unmapped: 88760320 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458825728 unmapped: 88752128 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458825728 unmapped: 88752128 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458833920 unmapped: 88743936 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458842112 unmapped: 88735744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458850304 unmapped: 88727552 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458858496 unmapped: 88719360 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458866688 unmapped: 88711168 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458874880 unmapped: 88702976 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458883072 unmapped: 88694784 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458899456 unmapped: 88678400 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458907648 unmapped: 88670208 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458915840 unmapped: 88662016 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458924032 unmapped: 88653824 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 88645632 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458940416 unmapped: 88637440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458940416 unmapped: 88637440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458940416 unmapped: 88637440 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 88629248 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 88629248 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458948608 unmapped: 88629248 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458956800 unmapped: 88621056 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458964992 unmapped: 88612864 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 88604672 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458973184 unmapped: 88604672 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458981376 unmapped: 88596480 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 458997760 unmapped: 88580096 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 88571904 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 88571904 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459014144 unmapped: 88563712 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459014144 unmapped: 88563712 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459022336 unmapped: 88555520 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459038720 unmapped: 88539136 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459046912 unmapped: 88530944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459046912 unmapped: 88530944 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 88514560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 88514560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459063296 unmapped: 88514560 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459079680 unmapped: 88498176 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 88489984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.9 total, 600.0 interval#012Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s#012Cumulative WAL: 68K writes, 26K syncs, 2.64 writes per sync, written: 0.25 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 508 writes, 775 keys, 508 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 508 writes, 252 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 88489984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459087872 unmapped: 88489984 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459096064 unmapped: 88481792 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459104256 unmapped: 88473600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459104256 unmapped: 88473600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459104256 unmapped: 88473600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459104256 unmapped: 88473600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459104256 unmapped: 88473600 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 88465408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 88465408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 88465408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459112448 unmapped: 88465408 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459120640 unmapped: 88457216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459120640 unmapped: 88457216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459120640 unmapped: 88457216 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459128832 unmapped: 88449024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459128832 unmapped: 88449024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459128832 unmapped: 88449024 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459137024 unmapped: 88440832 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459145216 unmapped: 88432640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459145216 unmapped: 88432640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459145216 unmapped: 88432640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459145216 unmapped: 88432640 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459153408 unmapped: 88424448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459153408 unmapped: 88424448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459153408 unmapped: 88424448 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459161600 unmapped: 88416256 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459177984 unmapped: 88399872 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459186176 unmapped: 88391680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459186176 unmapped: 88391680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459186176 unmapped: 88391680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459186176 unmapped: 88391680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459186176 unmapped: 88391680 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459210752 unmapped: 88367104 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459227136 unmapped: 88350720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459227136 unmapped: 88350720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459227136 unmapped: 88350720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459227136 unmapped: 88350720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459227136 unmapped: 88350720 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459235328 unmapped: 88342528 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459243520 unmapped: 88334336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459243520 unmapped: 88334336 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459251712 unmapped: 88326144 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459268096 unmapped: 88309760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459268096 unmapped: 88309760 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 88293376 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 88293376 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 88293376 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459284480 unmapped: 88293376 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459292672 unmapped: 88285184 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459309056 unmapped: 88268800 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 88260608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 88260608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459317248 unmapped: 88260608 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459325440 unmapped: 88252416 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459325440 unmapped: 88252416 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459325440 unmapped: 88252416 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459325440 unmapped: 88252416 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459325440 unmapped: 88252416 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459333632 unmapped: 88244224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459333632 unmapped: 88244224 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 88227840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459350016 unmapped: 88227840 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 88211456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 88211456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 88211456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 88211456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459366400 unmapped: 88211456 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459374592 unmapped: 88203264 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459382784 unmapped: 88195072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.409484863s of 600.182312012s, submitted: 256
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459382784 unmapped: 88195072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459382784 unmapped: 88195072 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 88186880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a04c0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459390976 unmapped: 88186880 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459399168 unmapped: 88178688 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459431936 unmapped: 88145920 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459464704 unmapped: 88113152 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459497472 unmapped: 88080384 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459505664 unmapped: 88072192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459505664 unmapped: 88072192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459505664 unmapped: 88072192 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459513856 unmapped: 88064000 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459522048 unmapped: 88055808 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459530240 unmapped: 88047616 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 88039424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 88039424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 88039424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 88039424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459538432 unmapped: 88039424 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459554816 unmapped: 88023040 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459563008 unmapped: 88014848 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459571200 unmapped: 88006656 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 87998464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 87998464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459579392 unmapped: 87998464 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459587584 unmapped: 87990272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459587584 unmapped: 87990272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459587584 unmapped: 87990272 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459595776 unmapped: 87982080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459595776 unmapped: 87982080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459595776 unmapped: 87982080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459595776 unmapped: 87982080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459595776 unmapped: 87982080 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459603968 unmapped: 87973888 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459603968 unmapped: 87973888 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459603968 unmapped: 87973888 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459612160 unmapped: 87965696 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459620352 unmapped: 87957504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459620352 unmapped: 87957504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459620352 unmapped: 87957504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459620352 unmapped: 87957504 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459628544 unmapped: 87949312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459628544 unmapped: 87949312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459628544 unmapped: 87949312 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459636736 unmapped: 87941120 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459653120 unmapped: 87924736 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459661312 unmapped: 87916544 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459677696 unmapped: 87900160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459677696 unmapped: 87900160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459677696 unmapped: 87900160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459677696 unmapped: 87900160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459677696 unmapped: 87900160 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459685888 unmapped: 87891968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459685888 unmapped: 87891968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459685888 unmapped: 87891968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459685888 unmapped: 87891968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459685888 unmapped: 87891968 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459694080 unmapped: 87883776 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459702272 unmapped: 87875584 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459702272 unmapped: 87875584 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459710464 unmapped: 87867392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459710464 unmapped: 87867392 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459718656 unmapped: 87859200 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459726848 unmapped: 87851008 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459726848 unmapped: 87851008 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: bluestore.MempoolThread(0x5616f1a09b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4784387 data_alloc: 218103808 data_used: 7335936
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: osd.1 438 heartbeat osd_stat(store_statfs(0x1a00b0000/0x0/0x1bfc00000, data 0x1e157d5/0x203e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,2] op hist [])
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459866112 unmapped: 87711744 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: prioritycache tune_memory target: 4294967296 mapped: 459341824 unmapped: 88236032 heap: 547577856 old mem: 2845415833 new mem: 2845415833
Nov 29 04:08:54 np0005539551 ceph-osd[78953]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/905741846' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:54 np0005539551 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:08:54 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3573145944' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:08:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:55 np0005539551 nova_compute[227360]: 2025-11-29 09:08:55.400 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:55 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:55 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:55 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:55 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:55 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3915277277' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2643288740' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223868609' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:56 np0005539551 nova_compute[227360]: 2025-11-29 09:08:56.756 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:56 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719316282' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1511510513' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:57.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:57 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:57 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:57 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:57 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:08:57 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4115779355' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:08:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:08:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500413863' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:08:58 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:08:58 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2873349756' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4058511430' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:08:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:59.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:59 np0005539551 systemd[1]: Starting Hostname Service...
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2436681072' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:08:59 np0005539551 systemd[1]: Started Hostname Service.
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1685426490' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:08:59 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:08:59 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:59 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341023033' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:08:59 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1692665712' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3075965662' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:09:00 np0005539551 nova_compute[227360]: 2025-11-29 09:09:00.434 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3784755945' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:09:00 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2283732807' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:09:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:01 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:01 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:01 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1787392855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/579204885' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:09:01 np0005539551 nova_compute[227360]: 2025-11-29 09:09:01.759 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:09:01 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1539350563' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2166859596' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1965828325' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3510267053' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:09:02 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735463076' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:09:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.558510) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343558544, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2512, "num_deletes": 251, "total_data_size": 6022745, "memory_usage": 6120368, "flush_reason": "Manual Compaction"}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Nov 29 04:09:03 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:03 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:03 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:03.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343594059, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3941526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83928, "largest_seqno": 86435, "table_properties": {"data_size": 3930997, "index_size": 6638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23583, "raw_average_key_size": 21, "raw_value_size": 3909351, "raw_average_value_size": 3518, "num_data_blocks": 288, "num_entries": 1111, "num_filter_entries": 1111, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407120, "oldest_key_time": 1764407120, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 35594 microseconds, and 7757 cpu microseconds.
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.594100) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3941526 bytes OK
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.594120) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.599934) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.599946) EVENT_LOG_v1 {"time_micros": 1764407343599942, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.599992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 6011330, prev total WAL file size 6011330, number of live WAL files 2.
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.601332) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3849KB)], [174(11MB)]
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343601395, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 16080444, "oldest_snapshot_seqno": -1}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11863 keys, 14092489 bytes, temperature: kUnknown
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343690102, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14092489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14018117, "index_size": 43603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313770, "raw_average_key_size": 26, "raw_value_size": 13812345, "raw_average_value_size": 1164, "num_data_blocks": 1647, "num_entries": 11863, "num_filter_entries": 11863, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400510, "oldest_key_time": 0, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a0019487-5659-4bb5-bfa3-5ec467e1c6c5", "db_session_id": "BQ87E2QTX8PUCMR4F5B1", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.690344) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14092489 bytes
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.691755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.1 rd, 158.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 12380, records dropped: 517 output_compression: NoCompression
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.691773) EVENT_LOG_v1 {"time_micros": 1764407343691764, "job": 112, "event": "compaction_finished", "compaction_time_micros": 88769, "compaction_time_cpu_micros": 33052, "output_level": 6, "num_output_files": 1, "total_output_size": 14092489, "num_input_records": 12380, "num_output_records": 11863, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343692605, "job": 112, "event": "table_file_deletion", "file_number": 176}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343694778, "job": 112, "event": "table_file_deletion", "file_number": 174}
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.601214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.694848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.694853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.694854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.694855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539551 ceph-mon[81672]: rocksdb: (Original Log Time 2025/11/29-09:09:03.694857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2094610014' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/850092357' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:09:04 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/662651324' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:05.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:05 np0005539551 ceph-mon[81672]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:09:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3747360618' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539551 nova_compute[227360]: 2025-11-29 09:09:05.437 227364 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:05 np0005539551 radosgw[83679]: ====== starting new request req=0x7fabc40fb6f0 =====
Nov 29 04:09:05 np0005539551 radosgw[83679]: ====== req done req=0x7fabc40fb6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:05 np0005539551 radosgw[83679]: beast: 0x7fabc40fb6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:05 np0005539551 ceph-mon[81672]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
